00:00:00.001 Started by upstream project "autotest-per-patch" build number 132759
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.023 The recommended git tool is: git
00:00:00.024 using credential 00000000-0000-0000-0000-000000000002
00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.040 Fetching changes from the remote Git repository
00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.067 Using shallow fetch with depth 1
00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.067 > git --version # timeout=10
00:00:00.087 > git --version # 'git version 2.39.2'
00:00:00.087 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.124 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.124 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.440 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.451 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.470 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.470 > git config core.sparsecheckout # timeout=10
00:00:03.482 > git read-tree -mu HEAD # timeout=10
00:00:03.500 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.529 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.529 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.612 [Pipeline] Start of Pipeline
00:00:03.627 [Pipeline] library
00:00:03.629 Loading library shm_lib@master
00:00:03.629 Library shm_lib@master is cached. Copying from home.
00:00:03.646 [Pipeline] node
00:00:03.658 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:03.659 [Pipeline] {
00:00:03.669 [Pipeline] catchError
00:00:03.671 [Pipeline] {
00:00:03.699 [Pipeline] wrap
00:00:03.732 [Pipeline] {
00:00:03.746 [Pipeline] stage
00:00:03.748 [Pipeline] { (Prologue)
00:00:03.763 [Pipeline] echo
00:00:03.765 Node: VM-host-WFP7
00:00:03.770 [Pipeline] cleanWs
00:00:03.777 [WS-CLEANUP] Deleting project workspace...
00:00:03.777 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.782 [WS-CLEANUP] done
00:00:03.964 [Pipeline] setCustomBuildProperty
00:00:04.045 [Pipeline] httpRequest
00:00:04.406 [Pipeline] echo
00:00:04.408 Sorcerer 10.211.164.101 is alive
00:00:04.415 [Pipeline] retry
00:00:04.416 [Pipeline] {
00:00:04.429 [Pipeline] httpRequest
00:00:04.434 HttpMethod: GET
00:00:04.434 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.435 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.436 Response Code: HTTP/1.1 200 OK
00:00:04.437 Success: Status code 200 is in the accepted range: 200,404
00:00:04.437 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.724 [Pipeline] }
00:00:04.739 [Pipeline] // retry
00:00:04.746 [Pipeline] sh
00:00:05.031 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.047 [Pipeline] httpRequest
00:00:06.958 [Pipeline] echo
00:00:06.960 Sorcerer 10.211.164.101 is alive
00:00:06.969 [Pipeline] retry
00:00:06.971 [Pipeline] {
00:00:06.984 [Pipeline] httpRequest
00:00:06.988 HttpMethod: GET
00:00:06.988 URL: http://10.211.164.101/packages/spdk_dd2b3744d23b761359866b18443edc70dcb8677e.tar.gz
00:00:06.989 Sending request to url: http://10.211.164.101/packages/spdk_dd2b3744d23b761359866b18443edc70dcb8677e.tar.gz
00:00:06.992 Response Code: HTTP/1.1 200 OK
00:00:06.993 Success: Status code 200 is in the accepted range: 200,404
00:00:06.993 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_dd2b3744d23b761359866b18443edc70dcb8677e.tar.gz
00:00:27.218 [Pipeline] }
00:00:27.234 [Pipeline] // retry
00:00:27.241 [Pipeline] sh
00:00:27.522 + tar --no-same-owner -xf spdk_dd2b3744d23b761359866b18443edc70dcb8677e.tar.gz
00:00:30.069 [Pipeline] sh
00:00:30.360 + git -C spdk log --oneline -n5
00:00:30.360 dd2b3744d bdev/compress: Simplify split logic for unmap operation
00:00:30.360 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:00:30.360 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:00:30.360 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:00:30.360 e2dfdf06c accel/mlx5: Register post_poller handler
00:00:30.426 [Pipeline] writeFile
00:00:30.442 [Pipeline] sh
00:00:30.727 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:30.741 [Pipeline] sh
00:00:31.031 + cat autorun-spdk.conf
00:00:31.031 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.031 SPDK_RUN_ASAN=1
00:00:31.031 SPDK_RUN_UBSAN=1
00:00:31.031 SPDK_TEST_RAID=1
00:00:31.031 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:31.039 RUN_NIGHTLY=0
00:00:31.041 [Pipeline] }
00:00:31.058 [Pipeline] // stage
00:00:31.078 [Pipeline] stage
00:00:31.081 [Pipeline] { (Run VM)
00:00:31.097 [Pipeline] sh
00:00:31.382 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:31.382 + echo 'Start stage prepare_nvme.sh'
00:00:31.382 Start stage prepare_nvme.sh
00:00:31.382 + [[ -n 0 ]]
00:00:31.382 + disk_prefix=ex0
00:00:31.382 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:00:31.382 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:00:31.382 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:00:31.382 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.383 ++ SPDK_RUN_ASAN=1
00:00:31.383 ++ SPDK_RUN_UBSAN=1
00:00:31.383 ++ SPDK_TEST_RAID=1
00:00:31.383 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:31.383 ++ RUN_NIGHTLY=0
00:00:31.383 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:00:31.383 + nvme_files=()
00:00:31.383 + declare -A nvme_files
00:00:31.383 + backend_dir=/var/lib/libvirt/images/backends
00:00:31.383 + nvme_files['nvme.img']=5G
00:00:31.383 + nvme_files['nvme-cmb.img']=5G
00:00:31.383 + nvme_files['nvme-multi0.img']=4G
00:00:31.383 + nvme_files['nvme-multi1.img']=4G
00:00:31.383 + nvme_files['nvme-multi2.img']=4G
00:00:31.383 + nvme_files['nvme-openstack.img']=8G
00:00:31.383 + nvme_files['nvme-zns.img']=5G
00:00:31.383 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:31.383 + (( SPDK_TEST_FTL == 1 ))
00:00:31.383 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:31.383 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:31.383 + for nvme in "${!nvme_files[@]}"
00:00:31.383 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:00:31.383 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.383 + for nvme in "${!nvme_files[@]}"
00:00:31.383 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:00:31.383 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.383 + for nvme in "${!nvme_files[@]}"
00:00:31.383 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:00:31.383 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:31.383 + for nvme in "${!nvme_files[@]}"
00:00:31.383 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:00:31.383 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.383 + for nvme in "${!nvme_files[@]}"
00:00:31.383 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:00:31.383 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.383 + for nvme in "${!nvme_files[@]}"
00:00:31.383 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:00:31.383 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.383 + for nvme in "${!nvme_files[@]}"
00:00:31.383 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:00:31.643 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.643 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:00:31.643 + echo 'End stage prepare_nvme.sh'
00:00:31.643 End stage prepare_nvme.sh
00:00:31.656 [Pipeline] sh
00:00:31.945 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:31.945 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:00:31.945
00:00:31.945 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:00:31.945 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:00:31.945 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:00:31.945 HELP=0
00:00:31.945 DRY_RUN=0
00:00:31.945 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:00:31.945 NVME_DISKS_TYPE=nvme,nvme,
00:00:31.945 NVME_AUTO_CREATE=0
00:00:31.945 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:00:31.945 NVME_CMB=,,
00:00:31.945 NVME_PMR=,,
00:00:31.945 NVME_ZNS=,,
00:00:31.945 NVME_MS=,,
00:00:31.945 NVME_FDP=,,
00:00:31.945 SPDK_VAGRANT_DISTRO=fedora39
00:00:31.945 SPDK_VAGRANT_VMCPU=10
00:00:31.945 SPDK_VAGRANT_VMRAM=12288
00:00:31.945 SPDK_VAGRANT_PROVIDER=libvirt
00:00:31.945 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:31.945 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:31.945 SPDK_OPENSTACK_NETWORK=0
00:00:31.945 VAGRANT_PACKAGE_BOX=0
00:00:31.945 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:31.945 FORCE_DISTRO=true
00:00:31.945 VAGRANT_BOX_VERSION=
00:00:31.945 EXTRA_VAGRANTFILES=
00:00:31.945 NIC_MODEL=virtio
00:00:31.945
00:00:31.945 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:00:31.945 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:00:34.511 Bringing machine 'default' up with 'libvirt' provider...
00:00:34.511 ==> default: Creating image (snapshot of base box volume).
00:00:34.772 ==> default: Creating domain with the following settings...
00:00:34.772 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1733528085_18c721e83ea3b857ad2f
00:00:34.772 ==> default:  -- Domain type: kvm
00:00:34.772 ==> default:  -- Cpus: 10
00:00:34.772 ==> default:  -- Feature: acpi
00:00:34.772 ==> default:  -- Feature: apic
00:00:34.772 ==> default:  -- Feature: pae
00:00:34.772 ==> default:  -- Memory: 12288M
00:00:34.772 ==> default:  -- Memory Backing: hugepages:
00:00:34.772 ==> default:  -- Management MAC:
00:00:34.772 ==> default:  -- Loader:
00:00:34.772 ==> default:  -- Nvram:
00:00:34.772 ==> default:  -- Base box: spdk/fedora39
00:00:34.772 ==> default:  -- Storage pool: default
00:00:34.772 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733528085_18c721e83ea3b857ad2f.img (20G)
00:00:34.772 ==> default:  -- Volume Cache: default
00:00:34.772 ==> default:  -- Kernel:
00:00:34.772 ==> default:  -- Initrd:
00:00:34.772 ==> default:  -- Graphics Type: vnc
00:00:34.772 ==> default:  -- Graphics Port: -1
00:00:34.772 ==> default:  -- Graphics IP: 127.0.0.1
00:00:34.772 ==> default:  -- Graphics Password: Not defined
00:00:34.772 ==> default:  -- Video Type: cirrus
00:00:34.772 ==> default:  -- Video VRAM: 9216
00:00:34.772 ==> default:  -- Sound Type:
00:00:34.772 ==> default:  -- Keymap: en-us
00:00:34.772 ==> default:  -- TPM Path:
00:00:34.772 ==> default:  -- INPUT: type=mouse, bus=ps2
00:00:34.772 ==> default:  -- Command line args:
00:00:34.772 ==> default:  -> value=-device,
00:00:34.772 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:34.772 ==> default:  -> value=-drive,
00:00:34.772 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:00:34.772 ==> default:  -> value=-device,
00:00:34.772 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.772 ==> default:  -> value=-device,
00:00:34.772 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:34.772 ==> default:  -> value=-drive,
00:00:34.772 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:34.772 ==> default:  -> value=-device,
00:00:34.772 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.772 ==> default:  -> value=-drive,
00:00:34.772 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:34.772 ==> default:  -> value=-device,
00:00:34.772 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.772 ==> default:  -> value=-drive,
00:00:34.772 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:34.772 ==> default:  -> value=-device,
00:00:34.772 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.772 ==> default: Creating shared folders metadata...
00:00:34.772 ==> default: Starting domain.
00:00:35.838 ==> default: Waiting for domain to get an IP address...
00:00:53.971 ==> default: Waiting for SSH to become available...
00:00:53.971 ==> default: Configuring and enabling network interfaces...
00:00:59.251     default: SSH address: 192.168.121.36:22
00:00:59.251     default: SSH username: vagrant
00:00:59.251     default: SSH auth method: private key
00:01:01.819 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:09.953 ==> default: Mounting SSHFS shared folder...
00:01:11.856 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:11.856 ==> default: Checking Mount..
00:01:13.235 ==> default: Folder Successfully Mounted!
00:01:13.235 ==> default: Running provisioner: file...
00:01:14.613     default: ~/.gitconfig => .gitconfig
00:01:14.873
00:01:14.873 SUCCESS!
00:01:14.873
00:01:14.873 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:14.873 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:14.873 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:14.873
00:01:14.882 [Pipeline] }
00:01:14.898 [Pipeline] // stage
00:01:14.907 [Pipeline] dir
00:01:14.908 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:01:14.910 [Pipeline] {
00:01:14.923 [Pipeline] catchError
00:01:14.925 [Pipeline] {
00:01:14.938 [Pipeline] sh
00:01:15.222 + vagrant ssh-config --host vagrant
00:01:15.222 + sed -ne /^Host/,$p
00:01:15.222 + tee ssh_conf
00:01:17.759 Host vagrant
00:01:17.759   HostName 192.168.121.36
00:01:17.759   User vagrant
00:01:17.759   Port 22
00:01:17.759   UserKnownHostsFile /dev/null
00:01:17.759   StrictHostKeyChecking no
00:01:17.759   PasswordAuthentication no
00:01:17.759   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:17.759   IdentitiesOnly yes
00:01:17.759   LogLevel FATAL
00:01:17.759   ForwardAgent yes
00:01:17.759   ForwardX11 yes
00:01:17.759
00:01:17.773 [Pipeline] withEnv
00:01:17.776 [Pipeline] {
00:01:17.789 [Pipeline] sh
00:01:18.071 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:18.071 source /etc/os-release
00:01:18.071 [[ -e /image.version ]] && img=$(< /image.version)
00:01:18.071 # Minimal, systemd-like check.
00:01:18.071 if [[ -e /.dockerenv ]]; then
00:01:18.071 # Clear garbage from the node's name:
00:01:18.071 # agt-er_autotest_547-896 -> autotest_547-896
00:01:18.071 # $HOSTNAME is the actual container id
00:01:18.071 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:18.071 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:18.071 # We can assume this is a mount from a host where container is running,
00:01:18.071 # so fetch its hostname to easily identify the target swarm worker.
00:01:18.071 container="$(< /etc/hostname) ($agent)"
00:01:18.071 else
00:01:18.071 # Fallback
00:01:18.071 container=$agent
00:01:18.071 fi
00:01:18.071 fi
00:01:18.071 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:18.071
00:01:18.342 [Pipeline] }
00:01:18.359 [Pipeline] // withEnv
00:01:18.368 [Pipeline] setCustomBuildProperty
00:01:18.384 [Pipeline] stage
00:01:18.386 [Pipeline] { (Tests)
00:01:18.403 [Pipeline] sh
00:01:18.686 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:18.958 [Pipeline] sh
00:01:19.237 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:19.512 [Pipeline] timeout
00:01:19.512 Timeout set to expire in 1 hr 30 min
00:01:19.514 [Pipeline] {
00:01:19.529 [Pipeline] sh
00:01:19.811 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:20.379 HEAD is now at dd2b3744d bdev/compress: Simplify split logic for unmap operation
00:01:20.390 [Pipeline] sh
00:01:20.671 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:20.943 [Pipeline] sh
00:01:21.224 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:21.499 [Pipeline] sh
00:01:21.780 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:22.039 ++ readlink -f spdk_repo
00:01:22.039 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:22.039 + [[ -n /home/vagrant/spdk_repo ]]
00:01:22.039 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:22.039 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:22.039 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:22.039 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:22.039 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:22.039 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:22.039 + cd /home/vagrant/spdk_repo
00:01:22.039 + source /etc/os-release
00:01:22.039 ++ NAME='Fedora Linux'
00:01:22.039 ++ VERSION='39 (Cloud Edition)'
00:01:22.039 ++ ID=fedora
00:01:22.039 ++ VERSION_ID=39
00:01:22.039 ++ VERSION_CODENAME=
00:01:22.039 ++ PLATFORM_ID=platform:f39
00:01:22.039 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:22.039 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:22.039 ++ LOGO=fedora-logo-icon
00:01:22.039 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:22.039 ++ HOME_URL=https://fedoraproject.org/
00:01:22.039 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:22.039 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:22.039 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:22.039 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:22.039 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:22.039 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:22.039 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:22.039 ++ SUPPORT_END=2024-11-12
00:01:22.039 ++ VARIANT='Cloud Edition'
00:01:22.039 ++ VARIANT_ID=cloud
00:01:22.039 + uname -a
00:01:22.039 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:22.039 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:22.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:22.605 Hugepages
00:01:22.605 node     hugesize     free /  total
00:01:22.605 node0   1048576kB        0 /      0
00:01:22.605 node0      2048kB        0 /      0
00:01:22.605
00:01:22.605 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:22.605 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:22.605 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:22.605 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:01:22.605 + rm -f /tmp/spdk-ld-path
00:01:22.605 + source autorun-spdk.conf
00:01:22.605 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.605 ++ SPDK_RUN_ASAN=1
00:01:22.605 ++ SPDK_RUN_UBSAN=1
00:01:22.605 ++ SPDK_TEST_RAID=1
00:01:22.605 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:22.605 ++ RUN_NIGHTLY=0
00:01:22.605 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:22.605 + [[ -n '' ]]
00:01:22.605 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:22.605 + for M in /var/spdk/build-*-manifest.txt
00:01:22.605 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:22.605 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:22.605 + for M in /var/spdk/build-*-manifest.txt
00:01:22.605 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:22.605 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:22.605 + for M in /var/spdk/build-*-manifest.txt
00:01:22.605 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:22.605 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:22.605 ++ uname
00:01:22.605 + [[ Linux == \L\i\n\u\x ]]
00:01:22.605 + sudo dmesg -T
00:01:22.863 + sudo dmesg --clear
00:01:22.863 + dmesg_pid=5425
00:01:22.863 + [[ Fedora Linux == FreeBSD ]]
00:01:22.863 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:22.863 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:22.863 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:22.863 + [[ -x /usr/src/fio-static/fio ]]
00:01:22.863 + sudo dmesg -Tw
00:01:22.863 + export FIO_BIN=/usr/src/fio-static/fio
00:01:22.863 + FIO_BIN=/usr/src/fio-static/fio
00:01:22.863 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:22.863 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:22.863 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:22.863 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:22.863 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:22.863 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:22.863 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:22.863 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:22.863 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:22.863   23:35:34 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:22.863   23:35:34 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:22.863   23:35:34 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.863   23:35:34 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:22.863   23:35:34 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:22.863   23:35:34 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:22.863   23:35:34 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:22.863   23:35:34 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:22.863   23:35:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:22.863   23:35:34 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:22.863   23:35:34 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:22.863   23:35:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:22.863   23:35:34 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:22.864   23:35:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:22.864   23:35:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:22.864   23:35:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:22.864   23:35:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:22.864   23:35:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:22.864   23:35:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:22.864   23:35:34 -- paths/export.sh@5 -- $ export PATH
00:01:22.864   23:35:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:22.864   23:35:34 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:22.864   23:35:34 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:23.122   23:35:34 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733528134.XXXXXX
00:01:23.122   23:35:34 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733528134.FUSGds
00:01:23.122   23:35:34 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:23.122   23:35:34 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:23.122   23:35:34 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:23.122   23:35:34 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:23.122   23:35:34 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:23.122   23:35:34 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:23.122   23:35:34 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:23.122   23:35:34 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.122   23:35:34 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:23.122   23:35:34 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:23.122   23:35:34 -- pm/common@17 -- $ local monitor
00:01:23.122   23:35:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.122   23:35:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.122   23:35:34 -- pm/common@21 -- $ date +%s
00:01:23.122   23:35:34 -- pm/common@25 -- $ sleep 1
00:01:23.122   23:35:34 -- pm/common@21 -- $ date +%s
00:01:23.122   23:35:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733528134
00:01:23.122   23:35:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733528134
00:01:23.122 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733528134_collect-cpu-load.pm.log
00:01:23.122 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733528134_collect-vmstat.pm.log
00:01:24.092   23:35:35 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:24.092   23:35:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:24.092   23:35:35 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:24.092   23:35:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:24.092   23:35:35 -- spdk/autobuild.sh@16 -- $ date -u
00:01:24.092 Fri Dec 6 11:35:35 PM UTC 2024
00:01:24.092   23:35:35 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:24.092 v25.01-pre-304-gdd2b3744d
00:01:24.092   23:35:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:24.092   23:35:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:24.092   23:35:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:24.092   23:35:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:24.092   23:35:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.092 ************************************
00:01:24.092 START TEST asan
00:01:24.092 ************************************
00:01:24.092 using asan
00:01:24.092   23:35:35 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:24.092
00:01:24.092 real	0m0.000s
00:01:24.092 user	0m0.000s
00:01:24.092 sys	0m0.000s
00:01:24.092   23:35:35 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:24.092   23:35:35 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:24.093 ************************************
00:01:24.093 END TEST asan
00:01:24.093 ************************************
00:01:24.093   23:35:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:24.093   23:35:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:24.093   23:35:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:24.093   23:35:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:24.093   23:35:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.093 ************************************
00:01:24.093 START TEST ubsan
00:01:24.093 ************************************
00:01:24.093 using ubsan
00:01:24.093   23:35:35 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:24.093
00:01:24.093 real	0m0.000s
00:01:24.093 user	0m0.000s
00:01:24.093 sys	0m0.000s
00:01:24.093   23:35:35 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:24.093   23:35:35 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:24.093 ************************************
00:01:24.093 END TEST ubsan
00:01:24.093 ************************************
00:01:24.093   23:35:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:24.093   23:35:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:24.093   23:35:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:24.093   23:35:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:24.093   23:35:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:24.093   23:35:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:24.093   23:35:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:24.093   23:35:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:24.093   23:35:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:24.348 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:24.348 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:24.914 Using 'verbs' RDMA provider
00:01:40.728 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:55.674 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:56.502 Creating mk/config.mk...done.
00:01:56.502 Creating mk/cc.flags.mk...done.
00:01:56.502 Type 'make' to build.
00:01:56.502   23:36:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:56.502   23:36:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.502   23:36:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.502   23:36:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.502 ************************************
00:01:56.502 START TEST make
00:01:56.502 ************************************
00:01:56.502   23:36:07 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:57.071 make[1]: Nothing to be done for 'all'.
00:02:07.130 The Meson build system 00:02:07.130 Version: 1.5.0 00:02:07.130 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:07.130 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:07.130 Build type: native build 00:02:07.130 Program cat found: YES (/usr/bin/cat) 00:02:07.130 Project name: DPDK 00:02:07.130 Project version: 24.03.0 00:02:07.130 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:07.130 C linker for the host machine: cc ld.bfd 2.40-14 00:02:07.130 Host machine cpu family: x86_64 00:02:07.130 Host machine cpu: x86_64 00:02:07.130 Message: ## Building in Developer Mode ## 00:02:07.130 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.130 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:07.130 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.130 Program python3 found: YES (/usr/bin/python3) 00:02:07.130 Program cat found: YES (/usr/bin/cat) 00:02:07.130 Compiler for C supports arguments -march=native: YES 00:02:07.130 Checking for size of "void *" : 8 00:02:07.130 Checking for size of "void *" : 8 (cached) 00:02:07.130 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:07.130 Library m found: YES 00:02:07.130 Library numa found: YES 00:02:07.130 Has header "numaif.h" : YES 00:02:07.130 Library fdt found: NO 00:02:07.130 Library execinfo found: NO 00:02:07.130 Has header "execinfo.h" : YES 00:02:07.130 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:07.130 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.130 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.130 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.130 Run-time dependency openssl found: YES 3.1.1 00:02:07.130 Run-time dependency libpcap found: YES 1.10.4 00:02:07.130 Has header "pcap.h" with dependency 
libpcap: YES 00:02:07.130 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.130 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.130 Compiler for C supports arguments -Wformat: YES 00:02:07.130 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.130 Compiler for C supports arguments -Wformat-security: NO 00:02:07.130 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.130 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.130 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.130 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.130 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.130 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.130 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.130 Compiler for C supports arguments -Wundef: YES 00:02:07.130 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.130 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.130 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.130 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.130 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.130 Program objdump found: YES (/usr/bin/objdump) 00:02:07.130 Compiler for C supports arguments -mavx512f: YES 00:02:07.130 Checking if "AVX512 checking" compiles: YES 00:02:07.130 Fetching value of define "__SSE4_2__" : 1 00:02:07.130 Fetching value of define "__AES__" : 1 00:02:07.130 Fetching value of define "__AVX__" : 1 00:02:07.130 Fetching value of define "__AVX2__" : 1 00:02:07.130 Fetching value of define "__AVX512BW__" : 1 00:02:07.130 Fetching value of define "__AVX512CD__" : 1 00:02:07.130 Fetching value of define "__AVX512DQ__" : 1 00:02:07.130 Fetching value of define "__AVX512F__" : 1 00:02:07.130 Fetching value of define "__AVX512VL__" : 1 00:02:07.130 Fetching value of define 
"__PCLMUL__" : 1 00:02:07.130 Fetching value of define "__RDRND__" : 1 00:02:07.130 Fetching value of define "__RDSEED__" : 1 00:02:07.130 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.130 Fetching value of define "__znver1__" : (undefined) 00:02:07.130 Fetching value of define "__znver2__" : (undefined) 00:02:07.130 Fetching value of define "__znver3__" : (undefined) 00:02:07.130 Fetching value of define "__znver4__" : (undefined) 00:02:07.130 Library asan found: YES 00:02:07.130 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.130 Message: lib/log: Defining dependency "log" 00:02:07.130 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.130 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.130 Library rt found: YES 00:02:07.130 Checking for function "getentropy" : NO 00:02:07.130 Message: lib/eal: Defining dependency "eal" 00:02:07.130 Message: lib/ring: Defining dependency "ring" 00:02:07.130 Message: lib/rcu: Defining dependency "rcu" 00:02:07.130 Message: lib/mempool: Defining dependency "mempool" 00:02:07.130 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.130 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.130 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.130 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.130 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.130 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:07.130 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:07.130 Compiler for C supports arguments -mpclmul: YES 00:02:07.130 Compiler for C supports arguments -maes: YES 00:02:07.130 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.130 Compiler for C supports arguments -mavx512bw: YES 00:02:07.130 Compiler for C supports arguments -mavx512dq: YES 00:02:07.130 Compiler for C supports arguments -mavx512vl: YES 00:02:07.130 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:07.130 Compiler for C supports arguments -mavx2: YES 00:02:07.130 Compiler for C supports arguments -mavx: YES 00:02:07.130 Message: lib/net: Defining dependency "net" 00:02:07.130 Message: lib/meter: Defining dependency "meter" 00:02:07.130 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.130 Message: lib/pci: Defining dependency "pci" 00:02:07.130 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.130 Message: lib/hash: Defining dependency "hash" 00:02:07.130 Message: lib/timer: Defining dependency "timer" 00:02:07.130 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.130 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.130 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.130 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.130 Message: lib/power: Defining dependency "power" 00:02:07.130 Message: lib/reorder: Defining dependency "reorder" 00:02:07.130 Message: lib/security: Defining dependency "security" 00:02:07.130 Has header "linux/userfaultfd.h" : YES 00:02:07.130 Has header "linux/vduse.h" : YES 00:02:07.130 Message: lib/vhost: Defining dependency "vhost" 00:02:07.130 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.130 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.130 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.130 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.130 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:07.130 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:07.130 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:07.130 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:07.130 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:07.130 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:07.130 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:07.130 Configuring doxy-api-html.conf using configuration 00:02:07.131 Configuring doxy-api-man.conf using configuration 00:02:07.131 Program mandb found: YES (/usr/bin/mandb) 00:02:07.131 Program sphinx-build found: NO 00:02:07.131 Configuring rte_build_config.h using configuration 00:02:07.131 Message: 00:02:07.131 ================= 00:02:07.131 Applications Enabled 00:02:07.131 ================= 00:02:07.131 00:02:07.131 apps: 00:02:07.131 00:02:07.131 00:02:07.131 Message: 00:02:07.131 ================= 00:02:07.131 Libraries Enabled 00:02:07.131 ================= 00:02:07.131 00:02:07.131 libs: 00:02:07.131 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.131 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:07.131 cryptodev, dmadev, power, reorder, security, vhost, 00:02:07.131 00:02:07.131 Message: 00:02:07.131 =============== 00:02:07.131 Drivers Enabled 00:02:07.131 =============== 00:02:07.131 00:02:07.131 common: 00:02:07.131 00:02:07.131 bus: 00:02:07.131 pci, vdev, 00:02:07.131 mempool: 00:02:07.131 ring, 00:02:07.131 dma: 00:02:07.131 00:02:07.131 net: 00:02:07.131 00:02:07.131 crypto: 00:02:07.131 00:02:07.131 compress: 00:02:07.131 00:02:07.131 vdpa: 00:02:07.131 00:02:07.131 00:02:07.131 Message: 00:02:07.131 ================= 00:02:07.131 Content Skipped 00:02:07.131 ================= 00:02:07.131 00:02:07.131 apps: 00:02:07.131 dumpcap: explicitly disabled via build config 00:02:07.131 graph: explicitly disabled via build config 00:02:07.131 pdump: explicitly disabled via build config 00:02:07.131 proc-info: explicitly disabled via build config 00:02:07.131 test-acl: explicitly disabled via build config 00:02:07.131 test-bbdev: explicitly disabled via build config 00:02:07.131 test-cmdline: explicitly disabled via build config 00:02:07.131 test-compress-perf: explicitly disabled via build config 00:02:07.131 test-crypto-perf: explicitly disabled via build 
config 00:02:07.131 test-dma-perf: explicitly disabled via build config 00:02:07.131 test-eventdev: explicitly disabled via build config 00:02:07.131 test-fib: explicitly disabled via build config 00:02:07.131 test-flow-perf: explicitly disabled via build config 00:02:07.131 test-gpudev: explicitly disabled via build config 00:02:07.131 test-mldev: explicitly disabled via build config 00:02:07.131 test-pipeline: explicitly disabled via build config 00:02:07.131 test-pmd: explicitly disabled via build config 00:02:07.131 test-regex: explicitly disabled via build config 00:02:07.131 test-sad: explicitly disabled via build config 00:02:07.131 test-security-perf: explicitly disabled via build config 00:02:07.131 00:02:07.131 libs: 00:02:07.131 argparse: explicitly disabled via build config 00:02:07.131 metrics: explicitly disabled via build config 00:02:07.131 acl: explicitly disabled via build config 00:02:07.131 bbdev: explicitly disabled via build config 00:02:07.131 bitratestats: explicitly disabled via build config 00:02:07.131 bpf: explicitly disabled via build config 00:02:07.131 cfgfile: explicitly disabled via build config 00:02:07.131 distributor: explicitly disabled via build config 00:02:07.131 efd: explicitly disabled via build config 00:02:07.131 eventdev: explicitly disabled via build config 00:02:07.131 dispatcher: explicitly disabled via build config 00:02:07.131 gpudev: explicitly disabled via build config 00:02:07.131 gro: explicitly disabled via build config 00:02:07.131 gso: explicitly disabled via build config 00:02:07.131 ip_frag: explicitly disabled via build config 00:02:07.131 jobstats: explicitly disabled via build config 00:02:07.131 latencystats: explicitly disabled via build config 00:02:07.131 lpm: explicitly disabled via build config 00:02:07.131 member: explicitly disabled via build config 00:02:07.131 pcapng: explicitly disabled via build config 00:02:07.131 rawdev: explicitly disabled via build config 00:02:07.131 regexdev: explicitly 
disabled via build config 00:02:07.131 mldev: explicitly disabled via build config 00:02:07.131 rib: explicitly disabled via build config 00:02:07.131 sched: explicitly disabled via build config 00:02:07.131 stack: explicitly disabled via build config 00:02:07.131 ipsec: explicitly disabled via build config 00:02:07.131 pdcp: explicitly disabled via build config 00:02:07.131 fib: explicitly disabled via build config 00:02:07.131 port: explicitly disabled via build config 00:02:07.131 pdump: explicitly disabled via build config 00:02:07.131 table: explicitly disabled via build config 00:02:07.131 pipeline: explicitly disabled via build config 00:02:07.131 graph: explicitly disabled via build config 00:02:07.131 node: explicitly disabled via build config 00:02:07.131 00:02:07.131 drivers: 00:02:07.131 common/cpt: not in enabled drivers build config 00:02:07.131 common/dpaax: not in enabled drivers build config 00:02:07.131 common/iavf: not in enabled drivers build config 00:02:07.131 common/idpf: not in enabled drivers build config 00:02:07.131 common/ionic: not in enabled drivers build config 00:02:07.131 common/mvep: not in enabled drivers build config 00:02:07.131 common/octeontx: not in enabled drivers build config 00:02:07.131 bus/auxiliary: not in enabled drivers build config 00:02:07.131 bus/cdx: not in enabled drivers build config 00:02:07.131 bus/dpaa: not in enabled drivers build config 00:02:07.131 bus/fslmc: not in enabled drivers build config 00:02:07.131 bus/ifpga: not in enabled drivers build config 00:02:07.131 bus/platform: not in enabled drivers build config 00:02:07.131 bus/uacce: not in enabled drivers build config 00:02:07.131 bus/vmbus: not in enabled drivers build config 00:02:07.131 common/cnxk: not in enabled drivers build config 00:02:07.131 common/mlx5: not in enabled drivers build config 00:02:07.131 common/nfp: not in enabled drivers build config 00:02:07.131 common/nitrox: not in enabled drivers build config 00:02:07.131 common/qat: not 
in enabled drivers build config 00:02:07.131 common/sfc_efx: not in enabled drivers build config 00:02:07.131 mempool/bucket: not in enabled drivers build config 00:02:07.131 mempool/cnxk: not in enabled drivers build config 00:02:07.131 mempool/dpaa: not in enabled drivers build config 00:02:07.131 mempool/dpaa2: not in enabled drivers build config 00:02:07.131 mempool/octeontx: not in enabled drivers build config 00:02:07.131 mempool/stack: not in enabled drivers build config 00:02:07.131 dma/cnxk: not in enabled drivers build config 00:02:07.131 dma/dpaa: not in enabled drivers build config 00:02:07.131 dma/dpaa2: not in enabled drivers build config 00:02:07.131 dma/hisilicon: not in enabled drivers build config 00:02:07.131 dma/idxd: not in enabled drivers build config 00:02:07.131 dma/ioat: not in enabled drivers build config 00:02:07.131 dma/skeleton: not in enabled drivers build config 00:02:07.131 net/af_packet: not in enabled drivers build config 00:02:07.131 net/af_xdp: not in enabled drivers build config 00:02:07.131 net/ark: not in enabled drivers build config 00:02:07.131 net/atlantic: not in enabled drivers build config 00:02:07.131 net/avp: not in enabled drivers build config 00:02:07.131 net/axgbe: not in enabled drivers build config 00:02:07.131 net/bnx2x: not in enabled drivers build config 00:02:07.131 net/bnxt: not in enabled drivers build config 00:02:07.131 net/bonding: not in enabled drivers build config 00:02:07.131 net/cnxk: not in enabled drivers build config 00:02:07.131 net/cpfl: not in enabled drivers build config 00:02:07.131 net/cxgbe: not in enabled drivers build config 00:02:07.131 net/dpaa: not in enabled drivers build config 00:02:07.131 net/dpaa2: not in enabled drivers build config 00:02:07.131 net/e1000: not in enabled drivers build config 00:02:07.131 net/ena: not in enabled drivers build config 00:02:07.131 net/enetc: not in enabled drivers build config 00:02:07.131 net/enetfec: not in enabled drivers build config 
00:02:07.131 net/enic: not in enabled drivers build config 00:02:07.131 net/failsafe: not in enabled drivers build config 00:02:07.131 net/fm10k: not in enabled drivers build config 00:02:07.131 net/gve: not in enabled drivers build config 00:02:07.131 net/hinic: not in enabled drivers build config 00:02:07.131 net/hns3: not in enabled drivers build config 00:02:07.131 net/i40e: not in enabled drivers build config 00:02:07.131 net/iavf: not in enabled drivers build config 00:02:07.131 net/ice: not in enabled drivers build config 00:02:07.131 net/idpf: not in enabled drivers build config 00:02:07.131 net/igc: not in enabled drivers build config 00:02:07.131 net/ionic: not in enabled drivers build config 00:02:07.131 net/ipn3ke: not in enabled drivers build config 00:02:07.131 net/ixgbe: not in enabled drivers build config 00:02:07.131 net/mana: not in enabled drivers build config 00:02:07.131 net/memif: not in enabled drivers build config 00:02:07.131 net/mlx4: not in enabled drivers build config 00:02:07.131 net/mlx5: not in enabled drivers build config 00:02:07.131 net/mvneta: not in enabled drivers build config 00:02:07.131 net/mvpp2: not in enabled drivers build config 00:02:07.131 net/netvsc: not in enabled drivers build config 00:02:07.131 net/nfb: not in enabled drivers build config 00:02:07.131 net/nfp: not in enabled drivers build config 00:02:07.131 net/ngbe: not in enabled drivers build config 00:02:07.131 net/null: not in enabled drivers build config 00:02:07.131 net/octeontx: not in enabled drivers build config 00:02:07.131 net/octeon_ep: not in enabled drivers build config 00:02:07.131 net/pcap: not in enabled drivers build config 00:02:07.131 net/pfe: not in enabled drivers build config 00:02:07.131 net/qede: not in enabled drivers build config 00:02:07.131 net/ring: not in enabled drivers build config 00:02:07.131 net/sfc: not in enabled drivers build config 00:02:07.131 net/softnic: not in enabled drivers build config 00:02:07.131 net/tap: not in 
enabled drivers build config 00:02:07.131 net/thunderx: not in enabled drivers build config 00:02:07.131 net/txgbe: not in enabled drivers build config 00:02:07.131 net/vdev_netvsc: not in enabled drivers build config 00:02:07.131 net/vhost: not in enabled drivers build config 00:02:07.131 net/virtio: not in enabled drivers build config 00:02:07.131 net/vmxnet3: not in enabled drivers build config 00:02:07.131 raw/*: missing internal dependency, "rawdev" 00:02:07.131 crypto/armv8: not in enabled drivers build config 00:02:07.132 crypto/bcmfs: not in enabled drivers build config 00:02:07.132 crypto/caam_jr: not in enabled drivers build config 00:02:07.132 crypto/ccp: not in enabled drivers build config 00:02:07.132 crypto/cnxk: not in enabled drivers build config 00:02:07.132 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.132 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.132 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.132 crypto/mlx5: not in enabled drivers build config 00:02:07.132 crypto/mvsam: not in enabled drivers build config 00:02:07.132 crypto/nitrox: not in enabled drivers build config 00:02:07.132 crypto/null: not in enabled drivers build config 00:02:07.132 crypto/octeontx: not in enabled drivers build config 00:02:07.132 crypto/openssl: not in enabled drivers build config 00:02:07.132 crypto/scheduler: not in enabled drivers build config 00:02:07.132 crypto/uadk: not in enabled drivers build config 00:02:07.132 crypto/virtio: not in enabled drivers build config 00:02:07.132 compress/isal: not in enabled drivers build config 00:02:07.132 compress/mlx5: not in enabled drivers build config 00:02:07.132 compress/nitrox: not in enabled drivers build config 00:02:07.132 compress/octeontx: not in enabled drivers build config 00:02:07.132 compress/zlib: not in enabled drivers build config 00:02:07.132 regex/*: missing internal dependency, "regexdev" 00:02:07.132 ml/*: missing internal dependency, "mldev" 
00:02:07.132 vdpa/ifc: not in enabled drivers build config 00:02:07.132 vdpa/mlx5: not in enabled drivers build config 00:02:07.132 vdpa/nfp: not in enabled drivers build config 00:02:07.132 vdpa/sfc: not in enabled drivers build config 00:02:07.132 event/*: missing internal dependency, "eventdev" 00:02:07.132 baseband/*: missing internal dependency, "bbdev" 00:02:07.132 gpu/*: missing internal dependency, "gpudev" 00:02:07.132 00:02:07.132 00:02:07.699 Build targets in project: 85 00:02:07.699 00:02:07.699 DPDK 24.03.0 00:02:07.699 00:02:07.699 User defined options 00:02:07.699 buildtype : debug 00:02:07.699 default_library : shared 00:02:07.699 libdir : lib 00:02:07.699 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:07.699 b_sanitize : address 00:02:07.699 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:07.699 c_link_args : 00:02:07.699 cpu_instruction_set: native 00:02:07.699 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:07.699 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:07.699 enable_docs : false 00:02:07.699 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:07.699 enable_kmods : false 00:02:07.699 max_lcores : 128 00:02:07.699 tests : false 00:02:07.699 00:02:07.699 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.958 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:08.216 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:08.216 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:08.216 [3/268] Linking static target lib/librte_kvargs.a 00:02:08.216 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:08.216 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.216 [6/268] Linking static target lib/librte_log.a 00:02:08.781 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.781 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.781 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.782 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.782 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.782 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.782 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.782 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.782 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.782 [16/268] Linking static target lib/librte_telemetry.a 00:02:08.782 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.040 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.040 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.298 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.298 [21/268] Linking target lib/librte_log.so.24.1 00:02:09.298 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.298 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.298 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.298 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.298 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.298 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.557 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.557 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:09.557 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.557 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.816 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.816 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.816 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.816 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.816 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.816 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.816 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.816 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.074 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.074 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.074 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.074 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.074 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.074 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.331 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.331 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.589 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.589 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.589 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.848 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.848 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.848 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.848 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.848 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.848 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.848 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.106 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.106 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.106 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.365 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.365 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.365 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.365 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.365 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.624 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.624 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.624 
[68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.624 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.883 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.883 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.883 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.883 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.144 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.144 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.144 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.144 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.144 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.403 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.403 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.403 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.403 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.661 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.661 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.661 [85/268] Linking static target lib/librte_ring.a 00:02:12.918 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.918 [87/268] Linking static target lib/librte_eal.a 00:02:12.918 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.918 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:12.918 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.918 [91/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.918 [92/268] Linking static target lib/librte_mempool.a 00:02:13.175 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.175 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.176 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.433 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.433 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.433 [98/268] Linking static target lib/librte_rcu.a 00:02:13.433 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.433 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.701 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.701 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.701 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.701 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.701 [105/268] Linking static target lib/librte_mbuf.a 00:02:13.701 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.962 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.962 [108/268] Linking static target lib/librte_net.a 00:02:13.962 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.962 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.962 [111/268] Linking static target lib/librte_meter.a 00:02:14.221 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.221 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.221 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.221 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.221 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.480 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.480 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.740 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.740 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.740 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.000 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.260 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.260 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.521 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.521 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.521 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.521 [128/268] Linking static target lib/librte_pci.a 00:02:15.521 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.521 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.521 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.781 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.781 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.781 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.781 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.781 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:15.781 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.781 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.781 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.781 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.042 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.042 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.042 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.042 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.302 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.302 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.302 [147/268] Linking static target lib/librte_cmdline.a 00:02:16.302 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.562 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.562 [150/268] Linking static target lib/librte_timer.a 00:02:16.562 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.562 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.562 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.822 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.822 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:17.081 [156/268] Linking static target lib/librte_ethdev.a 00:02:17.081 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:17.081 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.081 [159/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:17.081 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.081 [161/268] Linking static target lib/librte_compressdev.a 00:02:17.081 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.081 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.081 [164/268] Linking static target lib/librte_hash.a 00:02:17.340 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:17.600 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:17.600 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:17.600 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:17.858 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:17.858 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:17.858 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.858 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.858 [173/268] Linking static target lib/librte_dmadev.a 00:02:18.117 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.117 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.376 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.376 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.376 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.376 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.376 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:18.376 [181/268] Compiling C 
object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.635 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.635 [183/268] Linking static target lib/librte_cryptodev.a 00:02:18.635 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.894 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.894 [186/268] Linking static target lib/librte_power.a 00:02:18.894 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.894 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:19.154 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:19.154 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:19.154 [191/268] Linking static target lib/librte_reorder.a 00:02:19.154 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:19.154 [193/268] Linking static target lib/librte_security.a 00:02:19.724 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.983 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.983 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.983 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:20.242 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.242 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.501 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.502 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.502 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.760 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 
00:02:21.020 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:21.020 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:21.020 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.345 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.345 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:21.345 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.345 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.345 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.345 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:21.345 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.604 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.604 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:21.604 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.604 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.604 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.604 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:21.864 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.864 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:21.864 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.124 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.124 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:22.124 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.124 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.124 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:23.060 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.999 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.999 [230/268] Linking target lib/librte_eal.so.24.1 00:02:23.999 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:23.999 [232/268] Linking target lib/librte_pci.so.24.1 00:02:23.999 [233/268] Linking target lib/librte_meter.so.24.1 00:02:23.999 [234/268] Linking target lib/librte_ring.so.24.1 00:02:23.999 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.258 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.258 [237/268] Linking target lib/librte_timer.so.24.1 00:02:24.258 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.258 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.258 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.258 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.258 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.258 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:24.258 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:24.517 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.517 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.517 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.517 
[248/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.517 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.776 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.776 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.776 [252/268] Linking target lib/librte_net.so.24.1 00:02:24.776 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:24.776 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.035 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.035 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.035 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.035 [258/268] Linking target lib/librte_security.so.24.1 00:02:25.035 [259/268] Linking target lib/librte_hash.so.24.1 00:02:25.035 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.295 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.555 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:25.555 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.814 [264/268] Linking target lib/librte_power.so.24.1 00:02:28.379 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.379 [266/268] Linking static target lib/librte_vhost.a 00:02:30.285 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.285 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:30.285 INFO: autodetecting backend as ninja 00:02:30.285 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:52.273 CC lib/ut/ut.o 00:02:52.274 CC lib/log/log.o 00:02:52.274 CC lib/log/log_flags.o 00:02:52.274 CC lib/log/log_deprecated.o 00:02:52.274 CC lib/ut_mock/mock.o 
00:02:52.274 LIB libspdk_ut.a 00:02:52.274 LIB libspdk_ut_mock.a 00:02:52.274 SO libspdk_ut.so.2.0 00:02:52.274 SO libspdk_ut_mock.so.6.0 00:02:52.274 LIB libspdk_log.a 00:02:52.274 SYMLINK libspdk_ut.so 00:02:52.274 SO libspdk_log.so.7.1 00:02:52.274 SYMLINK libspdk_ut_mock.so 00:02:52.274 SYMLINK libspdk_log.so 00:02:52.274 CC lib/dma/dma.o 00:02:52.274 CXX lib/trace_parser/trace.o 00:02:52.274 CC lib/ioat/ioat.o 00:02:52.274 CC lib/util/base64.o 00:02:52.274 CC lib/util/cpuset.o 00:02:52.274 CC lib/util/bit_array.o 00:02:52.274 CC lib/util/crc16.o 00:02:52.274 CC lib/util/crc32c.o 00:02:52.274 CC lib/util/crc32.o 00:02:52.274 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.274 CC lib/util/crc32_ieee.o 00:02:52.274 CC lib/util/crc64.o 00:02:52.274 CC lib/util/dif.o 00:02:52.274 LIB libspdk_dma.a 00:02:52.274 SO libspdk_dma.so.5.0 00:02:52.274 CC lib/vfio_user/host/vfio_user.o 00:02:52.274 CC lib/util/fd.o 00:02:52.274 CC lib/util/fd_group.o 00:02:52.274 CC lib/util/file.o 00:02:52.274 SYMLINK libspdk_dma.so 00:02:52.274 CC lib/util/hexlify.o 00:02:52.274 CC lib/util/iov.o 00:02:52.274 LIB libspdk_ioat.a 00:02:52.274 SO libspdk_ioat.so.7.0 00:02:52.274 CC lib/util/math.o 00:02:52.274 CC lib/util/net.o 00:02:52.274 SYMLINK libspdk_ioat.so 00:02:52.274 CC lib/util/pipe.o 00:02:52.274 CC lib/util/strerror_tls.o 00:02:52.274 CC lib/util/string.o 00:02:52.274 LIB libspdk_vfio_user.a 00:02:52.274 CC lib/util/uuid.o 00:02:52.274 SO libspdk_vfio_user.so.5.0 00:02:52.274 CC lib/util/xor.o 00:02:52.274 CC lib/util/zipf.o 00:02:52.274 CC lib/util/md5.o 00:02:52.274 SYMLINK libspdk_vfio_user.so 00:02:52.274 LIB libspdk_util.a 00:02:52.274 LIB libspdk_trace_parser.a 00:02:52.274 SO libspdk_util.so.10.1 00:02:52.274 SO libspdk_trace_parser.so.6.0 00:02:52.274 SYMLINK libspdk_util.so 00:02:52.274 SYMLINK libspdk_trace_parser.so 00:02:52.274 CC lib/json/json_parse.o 00:02:52.274 CC lib/json/json_util.o 00:02:52.274 CC lib/json/json_write.o 00:02:52.274 CC lib/idxd/idxd.o 
00:02:52.274 CC lib/idxd/idxd_user.o 00:02:52.274 CC lib/idxd/idxd_kernel.o 00:02:52.274 CC lib/env_dpdk/env.o 00:02:52.274 CC lib/conf/conf.o 00:02:52.274 CC lib/vmd/vmd.o 00:02:52.274 CC lib/rdma_utils/rdma_utils.o 00:02:52.274 CC lib/env_dpdk/memory.o 00:02:52.274 CC lib/env_dpdk/pci.o 00:02:52.274 LIB libspdk_conf.a 00:02:52.274 CC lib/env_dpdk/init.o 00:02:52.274 CC lib/vmd/led.o 00:02:52.274 SO libspdk_conf.so.6.0 00:02:52.274 LIB libspdk_json.a 00:02:52.274 LIB libspdk_rdma_utils.a 00:02:52.274 SYMLINK libspdk_conf.so 00:02:52.274 CC lib/env_dpdk/threads.o 00:02:52.274 SO libspdk_json.so.6.0 00:02:52.274 SO libspdk_rdma_utils.so.1.0 00:02:52.274 SYMLINK libspdk_json.so 00:02:52.274 SYMLINK libspdk_rdma_utils.so 00:02:52.274 CC lib/env_dpdk/pci_ioat.o 00:02:52.274 CC lib/env_dpdk/pci_virtio.o 00:02:52.274 CC lib/env_dpdk/pci_vmd.o 00:02:52.274 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.274 CC lib/env_dpdk/pci_idxd.o 00:02:52.274 CC lib/rdma_provider/common.o 00:02:52.274 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.274 CC lib/env_dpdk/pci_event.o 00:02:52.274 CC lib/env_dpdk/sigbus_handler.o 00:02:52.274 CC lib/env_dpdk/pci_dpdk.o 00:02:52.274 LIB libspdk_idxd.a 00:02:52.274 SO libspdk_idxd.so.12.1 00:02:52.274 LIB libspdk_vmd.a 00:02:52.274 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.274 SO libspdk_vmd.so.6.0 00:02:52.274 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.274 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.274 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.274 SYMLINK libspdk_idxd.so 00:02:52.274 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.274 LIB libspdk_rdma_provider.a 00:02:52.274 SYMLINK libspdk_vmd.so 00:02:52.274 SO libspdk_rdma_provider.so.7.0 00:02:52.274 SYMLINK libspdk_rdma_provider.so 00:02:52.274 LIB libspdk_jsonrpc.a 00:02:52.274 SO libspdk_jsonrpc.so.6.0 00:02:52.274 SYMLINK libspdk_jsonrpc.so 00:02:52.274 CC lib/rpc/rpc.o 00:02:52.534 LIB libspdk_env_dpdk.a 00:02:52.534 LIB libspdk_rpc.a 00:02:52.534 SO libspdk_env_dpdk.so.15.1 00:02:52.534 
SO libspdk_rpc.so.6.0 00:02:52.793 SYMLINK libspdk_rpc.so 00:02:52.793 SYMLINK libspdk_env_dpdk.so 00:02:53.052 CC lib/trace/trace.o 00:02:53.052 CC lib/trace/trace_rpc.o 00:02:53.052 CC lib/trace/trace_flags.o 00:02:53.052 CC lib/notify/notify.o 00:02:53.052 CC lib/keyring/keyring.o 00:02:53.052 CC lib/keyring/keyring_rpc.o 00:02:53.052 CC lib/notify/notify_rpc.o 00:02:53.312 LIB libspdk_notify.a 00:02:53.312 SO libspdk_notify.so.6.0 00:02:53.312 LIB libspdk_keyring.a 00:02:53.312 SYMLINK libspdk_notify.so 00:02:53.312 LIB libspdk_trace.a 00:02:53.312 SO libspdk_keyring.so.2.0 00:02:53.312 SO libspdk_trace.so.11.0 00:02:53.312 SYMLINK libspdk_keyring.so 00:02:53.570 SYMLINK libspdk_trace.so 00:02:53.828 CC lib/thread/thread.o 00:02:53.828 CC lib/thread/iobuf.o 00:02:53.828 CC lib/sock/sock_rpc.o 00:02:53.828 CC lib/sock/sock.o 00:02:54.396 LIB libspdk_sock.a 00:02:54.396 SO libspdk_sock.so.10.0 00:02:54.396 SYMLINK libspdk_sock.so 00:02:54.655 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.655 CC lib/nvme/nvme_ctrlr.o 00:02:54.655 CC lib/nvme/nvme_fabric.o 00:02:54.655 CC lib/nvme/nvme_ns_cmd.o 00:02:54.655 CC lib/nvme/nvme_ns.o 00:02:54.655 CC lib/nvme/nvme_pcie_common.o 00:02:54.655 CC lib/nvme/nvme_pcie.o 00:02:54.655 CC lib/nvme/nvme_qpair.o 00:02:54.655 CC lib/nvme/nvme.o 00:02:55.224 CC lib/nvme/nvme_quirks.o 00:02:55.224 CC lib/nvme/nvme_transport.o 00:02:55.484 CC lib/nvme/nvme_discovery.o 00:02:55.484 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.484 LIB libspdk_thread.a 00:02:55.484 SO libspdk_thread.so.11.0 00:02:55.484 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.484 CC lib/nvme/nvme_tcp.o 00:02:55.484 SYMLINK libspdk_thread.so 00:02:55.484 CC lib/nvme/nvme_opal.o 00:02:55.743 CC lib/accel/accel.o 00:02:55.743 CC lib/nvme/nvme_io_msg.o 00:02:55.743 CC lib/nvme/nvme_poll_group.o 00:02:56.003 CC lib/nvme/nvme_zns.o 00:02:56.003 CC lib/blob/blobstore.o 00:02:56.003 CC lib/blob/request.o 00:02:56.262 CC lib/blob/zeroes.o 00:02:56.262 CC lib/init/json_config.o 
00:02:56.262 CC lib/init/subsystem.o 00:02:56.262 CC lib/init/subsystem_rpc.o 00:02:56.522 CC lib/accel/accel_rpc.o 00:02:56.522 CC lib/accel/accel_sw.o 00:02:56.522 CC lib/init/rpc.o 00:02:56.522 CC lib/blob/blob_bs_dev.o 00:02:56.522 LIB libspdk_init.a 00:02:56.522 CC lib/nvme/nvme_stubs.o 00:02:56.522 CC lib/virtio/virtio.o 00:02:56.522 SO libspdk_init.so.6.0 00:02:56.781 SYMLINK libspdk_init.so 00:02:56.781 CC lib/nvme/nvme_auth.o 00:02:56.781 CC lib/nvme/nvme_cuse.o 00:02:56.781 CC lib/fsdev/fsdev.o 00:02:56.781 CC lib/event/app.o 00:02:56.781 LIB libspdk_accel.a 00:02:57.039 CC lib/virtio/virtio_vhost_user.o 00:02:57.039 SO libspdk_accel.so.16.0 00:02:57.039 CC lib/nvme/nvme_rdma.o 00:02:57.039 SYMLINK libspdk_accel.so 00:02:57.039 CC lib/fsdev/fsdev_io.o 00:02:57.039 CC lib/fsdev/fsdev_rpc.o 00:02:57.039 CC lib/bdev/bdev.o 00:02:57.297 CC lib/bdev/bdev_rpc.o 00:02:57.297 CC lib/virtio/virtio_vfio_user.o 00:02:57.297 CC lib/event/reactor.o 00:02:57.297 CC lib/virtio/virtio_pci.o 00:02:57.297 LIB libspdk_fsdev.a 00:02:57.556 SO libspdk_fsdev.so.2.0 00:02:57.556 CC lib/bdev/bdev_zone.o 00:02:57.556 SYMLINK libspdk_fsdev.so 00:02:57.556 CC lib/bdev/part.o 00:02:57.556 CC lib/bdev/scsi_nvme.o 00:02:57.556 CC lib/event/log_rpc.o 00:02:57.556 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:57.556 CC lib/event/app_rpc.o 00:02:57.556 LIB libspdk_virtio.a 00:02:57.816 SO libspdk_virtio.so.7.0 00:02:57.816 CC lib/event/scheduler_static.o 00:02:57.816 SYMLINK libspdk_virtio.so 00:02:57.816 LIB libspdk_event.a 00:02:58.077 SO libspdk_event.so.14.0 00:02:58.077 SYMLINK libspdk_event.so 00:02:58.336 LIB libspdk_fuse_dispatcher.a 00:02:58.336 SO libspdk_fuse_dispatcher.so.1.0 00:02:58.336 SYMLINK libspdk_fuse_dispatcher.so 00:02:58.337 LIB libspdk_nvme.a 00:02:58.596 SO libspdk_nvme.so.15.0 00:02:58.856 SYMLINK libspdk_nvme.so 00:02:59.423 LIB libspdk_blob.a 00:02:59.683 SO libspdk_blob.so.12.0 00:02:59.683 SYMLINK libspdk_blob.so 00:02:59.943 CC lib/lvol/lvol.o 00:02:59.943 
CC lib/blobfs/blobfs.o 00:03:00.203 CC lib/blobfs/tree.o 00:03:00.203 LIB libspdk_bdev.a 00:03:00.203 SO libspdk_bdev.so.17.0 00:03:00.462 SYMLINK libspdk_bdev.so 00:03:00.462 CC lib/nvmf/ctrlr.o 00:03:00.462 CC lib/nvmf/ctrlr_discovery.o 00:03:00.462 CC lib/nvmf/subsystem.o 00:03:00.462 CC lib/nvmf/ctrlr_bdev.o 00:03:00.462 CC lib/ftl/ftl_core.o 00:03:00.462 CC lib/nbd/nbd.o 00:03:00.462 CC lib/scsi/dev.o 00:03:00.462 CC lib/ublk/ublk.o 00:03:00.722 CC lib/scsi/lun.o 00:03:00.982 CC lib/ftl/ftl_init.o 00:03:00.982 CC lib/nbd/nbd_rpc.o 00:03:01.242 LIB libspdk_blobfs.a 00:03:01.242 CC lib/scsi/port.o 00:03:01.242 SO libspdk_blobfs.so.11.0 00:03:01.242 CC lib/scsi/scsi.o 00:03:01.242 LIB libspdk_lvol.a 00:03:01.242 SO libspdk_lvol.so.11.0 00:03:01.242 CC lib/ftl/ftl_layout.o 00:03:01.242 SYMLINK libspdk_blobfs.so 00:03:01.242 CC lib/ftl/ftl_debug.o 00:03:01.242 LIB libspdk_nbd.a 00:03:01.242 SO libspdk_nbd.so.7.0 00:03:01.242 SYMLINK libspdk_lvol.so 00:03:01.242 CC lib/scsi/scsi_bdev.o 00:03:01.242 CC lib/ftl/ftl_io.o 00:03:01.242 CC lib/ftl/ftl_sb.o 00:03:01.242 SYMLINK libspdk_nbd.so 00:03:01.242 CC lib/ublk/ublk_rpc.o 00:03:01.242 CC lib/ftl/ftl_l2p.o 00:03:01.501 CC lib/nvmf/nvmf.o 00:03:01.501 CC lib/ftl/ftl_l2p_flat.o 00:03:01.501 LIB libspdk_ublk.a 00:03:01.501 CC lib/ftl/ftl_nv_cache.o 00:03:01.501 SO libspdk_ublk.so.3.0 00:03:01.501 CC lib/nvmf/nvmf_rpc.o 00:03:01.501 CC lib/nvmf/transport.o 00:03:01.501 CC lib/ftl/ftl_band.o 00:03:01.501 SYMLINK libspdk_ublk.so 00:03:01.502 CC lib/scsi/scsi_pr.o 00:03:01.760 CC lib/ftl/ftl_band_ops.o 00:03:01.760 CC lib/scsi/scsi_rpc.o 00:03:02.019 CC lib/nvmf/tcp.o 00:03:02.019 CC lib/scsi/task.o 00:03:02.019 CC lib/ftl/ftl_writer.o 00:03:02.019 CC lib/ftl/ftl_rq.o 00:03:02.278 CC lib/nvmf/stubs.o 00:03:02.278 CC lib/ftl/ftl_reloc.o 00:03:02.278 LIB libspdk_scsi.a 00:03:02.278 CC lib/nvmf/mdns_server.o 00:03:02.278 SO libspdk_scsi.so.9.0 00:03:02.537 CC lib/nvmf/rdma.o 00:03:02.537 CC lib/nvmf/auth.o 00:03:02.537 SYMLINK 
libspdk_scsi.so 00:03:02.537 CC lib/ftl/ftl_l2p_cache.o 00:03:02.796 CC lib/ftl/ftl_p2l.o 00:03:02.796 CC lib/ftl/ftl_p2l_log.o 00:03:02.796 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.796 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.796 CC lib/iscsi/conn.o 00:03:02.796 CC lib/vhost/vhost.o 00:03:03.055 CC lib/vhost/vhost_rpc.o 00:03:03.055 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:03.055 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:03.055 CC lib/iscsi/init_grp.o 00:03:03.055 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:03.314 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:03.314 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.314 CC lib/iscsi/iscsi.o 00:03:03.314 CC lib/iscsi/param.o 00:03:03.673 CC lib/vhost/vhost_scsi.o 00:03:03.673 CC lib/vhost/vhost_blk.o 00:03:03.673 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.673 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.673 CC lib/vhost/rte_vhost_user.o 00:03:03.673 CC lib/iscsi/portal_grp.o 00:03:03.673 CC lib/iscsi/tgt_node.o 00:03:03.932 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.932 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.932 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.932 CC lib/iscsi/iscsi_subsystem.o 00:03:03.932 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.191 CC lib/ftl/utils/ftl_conf.o 00:03:04.191 CC lib/iscsi/iscsi_rpc.o 00:03:04.191 CC lib/iscsi/task.o 00:03:04.191 CC lib/ftl/utils/ftl_md.o 00:03:04.449 CC lib/ftl/utils/ftl_mempool.o 00:03:04.449 CC lib/ftl/utils/ftl_bitmap.o 00:03:04.449 CC lib/ftl/utils/ftl_property.o 00:03:04.449 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.449 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.708 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.708 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.708 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.708 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.708 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:04.708 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.708 LIB libspdk_vhost.a 00:03:04.708 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.968 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.968 SO 
libspdk_vhost.so.8.0 00:03:04.968 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.968 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:04.968 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:04.968 CC lib/ftl/base/ftl_base_dev.o 00:03:04.968 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.968 SYMLINK libspdk_vhost.so 00:03:04.968 CC lib/ftl/ftl_trace.o 00:03:04.968 LIB libspdk_iscsi.a 00:03:05.228 SO libspdk_iscsi.so.8.0 00:03:05.228 LIB libspdk_nvmf.a 00:03:05.228 SYMLINK libspdk_iscsi.so 00:03:05.228 LIB libspdk_ftl.a 00:03:05.228 SO libspdk_nvmf.so.20.0 00:03:05.486 SO libspdk_ftl.so.9.0 00:03:05.486 SYMLINK libspdk_nvmf.so 00:03:05.746 SYMLINK libspdk_ftl.so 00:03:06.315 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.315 CC module/accel/error/accel_error.o 00:03:06.315 CC module/accel/iaa/accel_iaa.o 00:03:06.315 CC module/sock/posix/posix.o 00:03:06.315 CC module/accel/dsa/accel_dsa.o 00:03:06.315 CC module/fsdev/aio/fsdev_aio.o 00:03:06.315 CC module/accel/ioat/accel_ioat.o 00:03:06.315 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.315 CC module/keyring/file/keyring.o 00:03:06.315 CC module/blob/bdev/blob_bdev.o 00:03:06.315 LIB libspdk_env_dpdk_rpc.a 00:03:06.315 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.315 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.574 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:06.574 CC module/keyring/file/keyring_rpc.o 00:03:06.574 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.574 CC module/accel/error/accel_error_rpc.o 00:03:06.574 LIB libspdk_scheduler_dynamic.a 00:03:06.574 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.574 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.574 CC module/fsdev/aio/linux_aio_mgr.o 00:03:06.574 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.574 LIB libspdk_keyring_file.a 00:03:06.574 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.574 LIB libspdk_accel_ioat.a 00:03:06.574 LIB libspdk_blob_bdev.a 00:03:06.574 SO libspdk_keyring_file.so.2.0 00:03:06.574 LIB libspdk_accel_error.a 00:03:06.574 LIB libspdk_accel_iaa.a 00:03:06.574 SO 
libspdk_accel_ioat.so.6.0 00:03:06.574 SO libspdk_blob_bdev.so.12.0 00:03:06.833 SO libspdk_accel_error.so.2.0 00:03:06.833 SO libspdk_accel_iaa.so.3.0 00:03:06.833 SYMLINK libspdk_keyring_file.so 00:03:06.833 SYMLINK libspdk_blob_bdev.so 00:03:06.833 SYMLINK libspdk_accel_ioat.so 00:03:06.833 LIB libspdk_accel_dsa.a 00:03:06.833 SYMLINK libspdk_accel_iaa.so 00:03:06.833 SYMLINK libspdk_accel_error.so 00:03:06.833 SO libspdk_accel_dsa.so.5.0 00:03:06.833 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.833 SYMLINK libspdk_accel_dsa.so 00:03:06.833 CC module/keyring/linux/keyring.o 00:03:07.092 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.092 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.092 CC module/bdev/error/vbdev_error.o 00:03:07.092 CC module/bdev/gpt/gpt.o 00:03:07.092 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.092 CC module/bdev/delay/vbdev_delay.o 00:03:07.092 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.092 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:07.092 CC module/keyring/linux/keyring_rpc.o 00:03:07.092 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:07.092 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.092 LIB libspdk_scheduler_gscheduler.a 00:03:07.092 SO libspdk_scheduler_gscheduler.so.4.0 00:03:07.093 LIB libspdk_fsdev_aio.a 00:03:07.093 SO libspdk_fsdev_aio.so.1.0 00:03:07.352 SYMLINK libspdk_scheduler_gscheduler.so 00:03:07.352 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.352 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.352 LIB libspdk_keyring_linux.a 00:03:07.352 LIB libspdk_sock_posix.a 00:03:07.352 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.352 SO libspdk_keyring_linux.so.1.0 00:03:07.352 SO libspdk_sock_posix.so.6.0 00:03:07.352 SYMLINK libspdk_fsdev_aio.so 00:03:07.352 SYMLINK libspdk_keyring_linux.so 00:03:07.352 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.352 SYMLINK libspdk_sock_posix.so 00:03:07.352 LIB libspdk_bdev_error.a 00:03:07.352 LIB libspdk_blobfs_bdev.a 00:03:07.352 SO libspdk_bdev_error.so.6.0 
00:03:07.352 SO libspdk_blobfs_bdev.so.6.0 00:03:07.352 CC module/bdev/malloc/bdev_malloc.o 00:03:07.612 CC module/bdev/null/bdev_null.o 00:03:07.612 SYMLINK libspdk_bdev_error.so 00:03:07.612 SYMLINK libspdk_blobfs_bdev.so 00:03:07.612 LIB libspdk_bdev_gpt.a 00:03:07.612 CC module/bdev/null/bdev_null_rpc.o 00:03:07.612 LIB libspdk_bdev_delay.a 00:03:07.612 CC module/bdev/nvme/bdev_nvme.o 00:03:07.612 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.612 SO libspdk_bdev_gpt.so.6.0 00:03:07.612 SO libspdk_bdev_delay.so.6.0 00:03:07.612 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.612 SYMLINK libspdk_bdev_gpt.so 00:03:07.612 SYMLINK libspdk_bdev_delay.so 00:03:07.612 CC module/bdev/nvme/nvme_rpc.o 00:03:07.612 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.612 LIB libspdk_bdev_lvol.a 00:03:07.612 CC module/bdev/raid/bdev_raid.o 00:03:07.612 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.612 SO libspdk_bdev_lvol.so.6.0 00:03:07.872 SYMLINK libspdk_bdev_lvol.so 00:03:07.872 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.872 CC module/bdev/raid/raid0.o 00:03:07.872 LIB libspdk_bdev_null.a 00:03:07.872 SO libspdk_bdev_null.so.6.0 00:03:07.872 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.872 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.872 SYMLINK libspdk_bdev_null.so 00:03:07.872 CC module/bdev/raid/raid1.o 00:03:08.132 LIB libspdk_bdev_passthru.a 00:03:08.132 CC module/bdev/split/vbdev_split.o 00:03:08.132 LIB libspdk_bdev_malloc.a 00:03:08.132 SO libspdk_bdev_passthru.so.6.0 00:03:08.132 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:08.132 SO libspdk_bdev_malloc.so.6.0 00:03:08.132 SYMLINK libspdk_bdev_passthru.so 00:03:08.132 SYMLINK libspdk_bdev_malloc.so 00:03:08.132 CC module/bdev/raid/concat.o 00:03:08.392 CC module/bdev/aio/bdev_aio.o 00:03:08.392 CC module/bdev/nvme/vbdev_opal.o 00:03:08.392 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.392 CC module/bdev/ftl/bdev_ftl.o 00:03:08.392 CC module/bdev/split/vbdev_split_rpc.o 00:03:08.392 CC 
module/bdev/iscsi/bdev_iscsi.o 00:03:08.392 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.652 LIB libspdk_bdev_split.a 00:03:08.652 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.652 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.652 SO libspdk_bdev_split.so.6.0 00:03:08.652 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.652 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.652 LIB libspdk_bdev_aio.a 00:03:08.652 SYMLINK libspdk_bdev_split.so 00:03:08.652 CC module/bdev/raid/raid5f.o 00:03:08.652 LIB libspdk_bdev_ftl.a 00:03:08.652 SO libspdk_bdev_aio.so.6.0 00:03:08.652 SO libspdk_bdev_ftl.so.6.0 00:03:08.652 LIB libspdk_bdev_zone_block.a 00:03:08.911 SO libspdk_bdev_zone_block.so.6.0 00:03:08.911 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.911 SYMLINK libspdk_bdev_ftl.so 00:03:08.911 LIB libspdk_bdev_iscsi.a 00:03:08.911 SYMLINK libspdk_bdev_aio.so 00:03:08.911 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.911 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.911 SYMLINK libspdk_bdev_zone_block.so 00:03:08.911 SO libspdk_bdev_iscsi.so.6.0 00:03:08.911 SYMLINK libspdk_bdev_iscsi.so 00:03:09.172 LIB libspdk_bdev_virtio.a 00:03:09.432 SO libspdk_bdev_virtio.so.6.0 00:03:09.432 LIB libspdk_bdev_raid.a 00:03:09.432 SO libspdk_bdev_raid.so.6.0 00:03:09.432 SYMLINK libspdk_bdev_virtio.so 00:03:09.692 SYMLINK libspdk_bdev_raid.so 00:03:10.632 LIB libspdk_bdev_nvme.a 00:03:10.632 SO libspdk_bdev_nvme.so.7.1 00:03:10.891 SYMLINK libspdk_bdev_nvme.so 00:03:11.460 CC module/event/subsystems/vmd/vmd.o 00:03:11.460 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.460 CC module/event/subsystems/fsdev/fsdev.o 00:03:11.460 CC module/event/subsystems/keyring/keyring.o 00:03:11.460 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.460 CC module/event/subsystems/sock/sock.o 00:03:11.460 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.460 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.460 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.460 
LIB libspdk_event_vhost_blk.a 00:03:11.460 LIB libspdk_event_fsdev.a 00:03:11.460 LIB libspdk_event_vmd.a 00:03:11.720 LIB libspdk_event_keyring.a 00:03:11.720 LIB libspdk_event_sock.a 00:03:11.720 LIB libspdk_event_scheduler.a 00:03:11.720 SO libspdk_event_vhost_blk.so.3.0 00:03:11.720 SO libspdk_event_fsdev.so.1.0 00:03:11.720 SO libspdk_event_vmd.so.6.0 00:03:11.720 SO libspdk_event_keyring.so.1.0 00:03:11.720 SO libspdk_event_sock.so.5.0 00:03:11.720 SO libspdk_event_scheduler.so.4.0 00:03:11.720 LIB libspdk_event_iobuf.a 00:03:11.720 SYMLINK libspdk_event_vhost_blk.so 00:03:11.720 SO libspdk_event_iobuf.so.3.0 00:03:11.720 SYMLINK libspdk_event_fsdev.so 00:03:11.720 SYMLINK libspdk_event_keyring.so 00:03:11.720 SYMLINK libspdk_event_vmd.so 00:03:11.720 SYMLINK libspdk_event_sock.so 00:03:11.720 SYMLINK libspdk_event_scheduler.so 00:03:11.720 SYMLINK libspdk_event_iobuf.so 00:03:11.980 CC module/event/subsystems/accel/accel.o 00:03:12.240 LIB libspdk_event_accel.a 00:03:12.240 SO libspdk_event_accel.so.6.0 00:03:12.500 SYMLINK libspdk_event_accel.so 00:03:12.759 CC module/event/subsystems/bdev/bdev.o 00:03:13.019 LIB libspdk_event_bdev.a 00:03:13.019 SO libspdk_event_bdev.so.6.0 00:03:13.019 SYMLINK libspdk_event_bdev.so 00:03:13.290 CC module/event/subsystems/scsi/scsi.o 00:03:13.290 CC module/event/subsystems/nbd/nbd.o 00:03:13.290 CC module/event/subsystems/ublk/ublk.o 00:03:13.290 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.290 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.577 LIB libspdk_event_nbd.a 00:03:13.577 LIB libspdk_event_ublk.a 00:03:13.577 LIB libspdk_event_scsi.a 00:03:13.577 SO libspdk_event_ublk.so.3.0 00:03:13.577 SO libspdk_event_nbd.so.6.0 00:03:13.577 SO libspdk_event_scsi.so.6.0 00:03:13.577 SYMLINK libspdk_event_nbd.so 00:03:13.577 SYMLINK libspdk_event_ublk.so 00:03:13.577 SYMLINK libspdk_event_scsi.so 00:03:13.577 LIB libspdk_event_nvmf.a 00:03:13.836 SO libspdk_event_nvmf.so.6.0 00:03:13.836 SYMLINK libspdk_event_nvmf.so 
00:03:14.095 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.095 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.095 LIB libspdk_event_vhost_scsi.a 00:03:14.095 LIB libspdk_event_iscsi.a 00:03:14.354 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.354 SO libspdk_event_iscsi.so.6.0 00:03:14.354 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.354 SYMLINK libspdk_event_iscsi.so 00:03:14.613 SO libspdk.so.6.0 00:03:14.613 SYMLINK libspdk.so 00:03:14.871 CC app/trace_record/trace_record.o 00:03:14.871 CC app/spdk_nvme_identify/identify.o 00:03:14.871 CC app/spdk_nvme_perf/perf.o 00:03:14.871 CXX app/trace/trace.o 00:03:14.871 CC app/spdk_lspci/spdk_lspci.o 00:03:14.871 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.871 CC app/nvmf_tgt/nvmf_main.o 00:03:14.871 CC app/spdk_tgt/spdk_tgt.o 00:03:14.871 CC examples/util/zipf/zipf.o 00:03:14.871 CC test/thread/poller_perf/poller_perf.o 00:03:14.871 LINK spdk_lspci 00:03:15.130 LINK iscsi_tgt 00:03:15.130 LINK nvmf_tgt 00:03:15.130 LINK zipf 00:03:15.130 LINK poller_perf 00:03:15.130 LINK spdk_trace_record 00:03:15.130 LINK spdk_tgt 00:03:15.387 LINK spdk_trace 00:03:15.387 CC app/spdk_nvme_discover/discovery_aer.o 00:03:15.387 CC app/spdk_top/spdk_top.o 00:03:15.387 CC examples/ioat/perf/perf.o 00:03:15.387 CC app/spdk_dd/spdk_dd.o 00:03:15.646 LINK spdk_nvme_discover 00:03:15.646 CC test/dma/test_dma/test_dma.o 00:03:15.646 CC test/app/bdev_svc/bdev_svc.o 00:03:15.646 TEST_HEADER include/spdk/accel.h 00:03:15.646 TEST_HEADER include/spdk/accel_module.h 00:03:15.646 TEST_HEADER include/spdk/assert.h 00:03:15.646 TEST_HEADER include/spdk/barrier.h 00:03:15.646 TEST_HEADER include/spdk/base64.h 00:03:15.646 TEST_HEADER include/spdk/bdev.h 00:03:15.646 TEST_HEADER include/spdk/bdev_module.h 00:03:15.646 TEST_HEADER include/spdk/bdev_zone.h 00:03:15.646 TEST_HEADER include/spdk/bit_array.h 00:03:15.646 TEST_HEADER include/spdk/bit_pool.h 00:03:15.646 TEST_HEADER include/spdk/blob_bdev.h 00:03:15.646 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:03:15.646 CC app/fio/nvme/fio_plugin.o 00:03:15.646 TEST_HEADER include/spdk/blobfs.h 00:03:15.646 TEST_HEADER include/spdk/blob.h 00:03:15.646 TEST_HEADER include/spdk/conf.h 00:03:15.646 TEST_HEADER include/spdk/config.h 00:03:15.646 TEST_HEADER include/spdk/cpuset.h 00:03:15.646 TEST_HEADER include/spdk/crc16.h 00:03:15.646 TEST_HEADER include/spdk/crc32.h 00:03:15.646 TEST_HEADER include/spdk/crc64.h 00:03:15.646 TEST_HEADER include/spdk/dif.h 00:03:15.646 TEST_HEADER include/spdk/dma.h 00:03:15.646 TEST_HEADER include/spdk/endian.h 00:03:15.646 TEST_HEADER include/spdk/env_dpdk.h 00:03:15.646 TEST_HEADER include/spdk/env.h 00:03:15.646 TEST_HEADER include/spdk/event.h 00:03:15.646 TEST_HEADER include/spdk/fd_group.h 00:03:15.646 TEST_HEADER include/spdk/fd.h 00:03:15.646 TEST_HEADER include/spdk/file.h 00:03:15.646 TEST_HEADER include/spdk/fsdev.h 00:03:15.646 TEST_HEADER include/spdk/fsdev_module.h 00:03:15.646 TEST_HEADER include/spdk/ftl.h 00:03:15.646 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:15.646 TEST_HEADER include/spdk/gpt_spec.h 00:03:15.646 TEST_HEADER include/spdk/hexlify.h 00:03:15.646 TEST_HEADER include/spdk/histogram_data.h 00:03:15.646 TEST_HEADER include/spdk/idxd.h 00:03:15.646 TEST_HEADER include/spdk/idxd_spec.h 00:03:15.646 TEST_HEADER include/spdk/init.h 00:03:15.646 TEST_HEADER include/spdk/ioat.h 00:03:15.646 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.646 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.646 TEST_HEADER include/spdk/json.h 00:03:15.646 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.646 TEST_HEADER include/spdk/keyring.h 00:03:15.646 TEST_HEADER include/spdk/keyring_module.h 00:03:15.646 TEST_HEADER include/spdk/likely.h 00:03:15.646 TEST_HEADER include/spdk/log.h 00:03:15.646 TEST_HEADER include/spdk/lvol.h 00:03:15.646 TEST_HEADER include/spdk/md5.h 00:03:15.646 TEST_HEADER include/spdk/memory.h 00:03:15.646 TEST_HEADER include/spdk/mmio.h 00:03:15.646 TEST_HEADER include/spdk/nbd.h 
00:03:15.646 TEST_HEADER include/spdk/net.h 00:03:15.646 TEST_HEADER include/spdk/notify.h 00:03:15.646 TEST_HEADER include/spdk/nvme.h 00:03:15.646 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.646 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.646 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:15.646 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.646 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.646 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:15.646 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.646 TEST_HEADER include/spdk/nvmf.h 00:03:15.646 TEST_HEADER include/spdk/nvmf_spec.h 00:03:15.646 TEST_HEADER include/spdk/nvmf_transport.h 00:03:15.646 TEST_HEADER include/spdk/opal.h 00:03:15.646 TEST_HEADER include/spdk/opal_spec.h 00:03:15.646 TEST_HEADER include/spdk/pci_ids.h 00:03:15.646 TEST_HEADER include/spdk/pipe.h 00:03:15.646 TEST_HEADER include/spdk/queue.h 00:03:15.646 TEST_HEADER include/spdk/reduce.h 00:03:15.646 TEST_HEADER include/spdk/rpc.h 00:03:15.646 TEST_HEADER include/spdk/scheduler.h 00:03:15.646 TEST_HEADER include/spdk/scsi.h 00:03:15.646 TEST_HEADER include/spdk/scsi_spec.h 00:03:15.646 TEST_HEADER include/spdk/sock.h 00:03:15.646 TEST_HEADER include/spdk/stdinc.h 00:03:15.646 TEST_HEADER include/spdk/string.h 00:03:15.646 TEST_HEADER include/spdk/thread.h 00:03:15.646 TEST_HEADER include/spdk/trace.h 00:03:15.646 TEST_HEADER include/spdk/trace_parser.h 00:03:15.646 TEST_HEADER include/spdk/tree.h 00:03:15.646 TEST_HEADER include/spdk/ublk.h 00:03:15.646 TEST_HEADER include/spdk/util.h 00:03:15.646 TEST_HEADER include/spdk/uuid.h 00:03:15.646 TEST_HEADER include/spdk/version.h 00:03:15.646 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:15.646 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:15.646 TEST_HEADER include/spdk/vhost.h 00:03:15.646 TEST_HEADER include/spdk/vmd.h 00:03:15.646 TEST_HEADER include/spdk/xor.h 00:03:15.646 LINK ioat_perf 00:03:15.646 TEST_HEADER include/spdk/zipf.h 00:03:15.646 CXX test/cpp_headers/accel.o 00:03:15.905 
LINK bdev_svc 00:03:15.905 LINK spdk_nvme_perf 00:03:15.905 CXX test/cpp_headers/accel_module.o 00:03:15.905 LINK spdk_nvme_identify 00:03:15.905 LINK spdk_dd 00:03:15.905 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.905 CC examples/ioat/verify/verify.o 00:03:16.163 CXX test/cpp_headers/assert.o 00:03:16.163 LINK test_dma 00:03:16.163 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.163 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.163 LINK verify 00:03:16.163 CC test/env/vtophys/vtophys.o 00:03:16.163 LINK spdk_nvme 00:03:16.163 CXX test/cpp_headers/barrier.o 00:03:16.423 CC test/event/event_perf/event_perf.o 00:03:16.423 LINK vtophys 00:03:16.423 CXX test/cpp_headers/base64.o 00:03:16.423 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.423 LINK event_perf 00:03:16.423 CC app/fio/bdev/fio_plugin.o 00:03:16.682 LINK spdk_top 00:03:16.682 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.682 CXX test/cpp_headers/bdev.o 00:03:16.682 LINK env_dpdk_post_init 00:03:16.682 LINK nvme_fuzz 00:03:16.682 CC examples/vmd/led/led.o 00:03:16.682 CC test/event/reactor/reactor.o 00:03:16.682 LINK lsvmd 00:03:16.682 LINK mem_callbacks 00:03:16.942 CC test/event/reactor_perf/reactor_perf.o 00:03:16.942 CXX test/cpp_headers/bdev_module.o 00:03:16.942 LINK led 00:03:16.943 LINK reactor 00:03:16.943 CC test/event/app_repeat/app_repeat.o 00:03:16.943 LINK reactor_perf 00:03:16.943 CC test/env/memory/memory_ut.o 00:03:16.943 CC test/event/scheduler/scheduler.o 00:03:16.943 CXX test/cpp_headers/bdev_zone.o 00:03:16.943 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.202 LINK spdk_bdev 00:03:17.202 LINK app_repeat 00:03:17.202 CC test/rpc_client/rpc_client_test.o 00:03:17.202 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.202 CXX test/cpp_headers/bit_array.o 00:03:17.202 LINK scheduler 00:03:17.202 CC examples/idxd/perf/perf.o 00:03:17.461 CXX test/cpp_headers/bit_pool.o 00:03:17.461 CC app/vhost/vhost.o 00:03:17.461 CC test/accel/dif/dif.o 00:03:17.461 LINK 
rpc_client_test 00:03:17.461 CXX test/cpp_headers/blob_bdev.o 00:03:17.461 CC test/app/histogram_perf/histogram_perf.o 00:03:17.461 LINK vhost 00:03:17.721 CC test/app/jsoncat/jsoncat.o 00:03:17.721 LINK histogram_perf 00:03:17.721 CC test/env/pci/pci_ut.o 00:03:17.721 LINK idxd_perf 00:03:17.721 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.721 LINK vhost_fuzz 00:03:17.721 CXX test/cpp_headers/blobfs.o 00:03:17.721 LINK jsoncat 00:03:18.039 CXX test/cpp_headers/blob.o 00:03:18.039 CC test/app/stub/stub.o 00:03:18.039 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.039 CXX test/cpp_headers/conf.o 00:03:18.039 CC examples/thread/thread/thread_ex.o 00:03:18.039 CXX test/cpp_headers/config.o 00:03:18.039 LINK stub 00:03:18.039 LINK interrupt_tgt 00:03:18.039 LINK pci_ut 00:03:18.039 CXX test/cpp_headers/cpuset.o 00:03:18.039 CC test/blobfs/mkfs/mkfs.o 00:03:18.298 LINK dif 00:03:18.298 CXX test/cpp_headers/crc16.o 00:03:18.298 CXX test/cpp_headers/crc32.o 00:03:18.298 CXX test/cpp_headers/crc64.o 00:03:18.298 LINK memory_ut 00:03:18.298 LINK thread 00:03:18.298 CC test/lvol/esnap/esnap.o 00:03:18.298 LINK mkfs 00:03:18.298 LINK iscsi_fuzz 00:03:18.298 CXX test/cpp_headers/dif.o 00:03:18.558 CXX test/cpp_headers/dma.o 00:03:18.558 CC test/nvme/aer/aer.o 00:03:18.558 CC examples/sock/hello_world/hello_sock.o 00:03:18.558 CC test/nvme/reset/reset.o 00:03:18.558 CC test/nvme/sgl/sgl.o 00:03:18.558 CXX test/cpp_headers/endian.o 00:03:18.558 CC test/nvme/e2edp/nvme_dp.o 00:03:18.558 CC test/bdev/bdevio/bdevio.o 00:03:18.558 CC test/nvme/overhead/overhead.o 00:03:18.558 CC test/nvme/err_injection/err_injection.o 00:03:18.818 CXX test/cpp_headers/env_dpdk.o 00:03:18.818 LINK aer 00:03:18.818 LINK hello_sock 00:03:18.818 LINK reset 00:03:18.818 LINK err_injection 00:03:18.818 CXX test/cpp_headers/env.o 00:03:18.818 LINK nvme_dp 00:03:18.818 LINK sgl 00:03:19.077 LINK overhead 00:03:19.077 CXX test/cpp_headers/event.o 00:03:19.077 LINK bdevio 00:03:19.077 CC 
test/nvme/reserve/reserve.o 00:03:19.077 CC test/nvme/startup/startup.o 00:03:19.077 CC test/nvme/simple_copy/simple_copy.o 00:03:19.077 CC test/nvme/connect_stress/connect_stress.o 00:03:19.077 CC examples/accel/perf/accel_perf.o 00:03:19.077 CC test/nvme/boot_partition/boot_partition.o 00:03:19.336 CC test/nvme/compliance/nvme_compliance.o 00:03:19.337 CXX test/cpp_headers/fd_group.o 00:03:19.337 LINK startup 00:03:19.337 LINK reserve 00:03:19.337 LINK connect_stress 00:03:19.337 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.337 LINK boot_partition 00:03:19.337 LINK simple_copy 00:03:19.337 CXX test/cpp_headers/fd.o 00:03:19.597 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.597 CXX test/cpp_headers/file.o 00:03:19.597 CC test/nvme/fdp/fdp.o 00:03:19.597 LINK fused_ordering 00:03:19.597 LINK nvme_compliance 00:03:19.597 CC test/nvme/cuse/cuse.o 00:03:19.857 LINK doorbell_aers 00:03:19.857 CXX test/cpp_headers/fsdev.o 00:03:19.857 CC examples/blob/hello_world/hello_blob.o 00:03:19.857 LINK accel_perf 00:03:19.857 CC examples/nvme/hello_world/hello_world.o 00:03:19.857 CXX test/cpp_headers/fsdev_module.o 00:03:19.857 CXX test/cpp_headers/ftl.o 00:03:20.117 CXX test/cpp_headers/fuse_dispatcher.o 00:03:20.117 LINK fdp 00:03:20.117 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.117 LINK hello_blob 00:03:20.117 CC examples/nvme/reconnect/reconnect.o 00:03:20.117 LINK hello_world 00:03:20.117 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.117 CXX test/cpp_headers/gpt_spec.o 00:03:20.117 CXX test/cpp_headers/hexlify.o 00:03:20.378 CXX test/cpp_headers/histogram_data.o 00:03:20.378 LINK hello_fsdev 00:03:20.378 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.378 CC examples/blob/cli/blobcli.o 00:03:20.378 CC examples/nvme/arbitration/arbitration.o 00:03:20.378 CXX test/cpp_headers/idxd.o 00:03:20.378 CC examples/nvme/hotplug/hotplug.o 00:03:20.378 LINK reconnect 00:03:20.641 CXX test/cpp_headers/idxd_spec.o 00:03:20.641 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:03:20.641 LINK hello_bdev 00:03:20.641 CXX test/cpp_headers/init.o 00:03:20.641 LINK nvme_manage 00:03:20.641 LINK hotplug 00:03:20.900 CXX test/cpp_headers/ioat.o 00:03:20.900 LINK cmb_copy 00:03:20.900 LINK arbitration 00:03:20.900 CXX test/cpp_headers/ioat_spec.o 00:03:20.900 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.900 CC examples/nvme/abort/abort.o 00:03:20.900 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.900 LINK blobcli 00:03:20.900 CXX test/cpp_headers/iscsi_spec.o 00:03:21.158 CXX test/cpp_headers/json.o 00:03:21.159 CXX test/cpp_headers/jsonrpc.o 00:03:21.159 CXX test/cpp_headers/keyring.o 00:03:21.159 LINK pmr_persistence 00:03:21.159 CXX test/cpp_headers/keyring_module.o 00:03:21.159 CXX test/cpp_headers/likely.o 00:03:21.159 CXX test/cpp_headers/log.o 00:03:21.159 LINK cuse 00:03:21.159 CXX test/cpp_headers/lvol.o 00:03:21.159 CXX test/cpp_headers/md5.o 00:03:21.417 CXX test/cpp_headers/memory.o 00:03:21.418 LINK abort 00:03:21.418 CXX test/cpp_headers/mmio.o 00:03:21.418 CXX test/cpp_headers/nbd.o 00:03:21.418 CXX test/cpp_headers/net.o 00:03:21.418 CXX test/cpp_headers/notify.o 00:03:21.418 CXX test/cpp_headers/nvme.o 00:03:21.418 CXX test/cpp_headers/nvme_intel.o 00:03:21.418 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.418 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:21.418 CXX test/cpp_headers/nvme_spec.o 00:03:21.418 CXX test/cpp_headers/nvme_zns.o 00:03:21.677 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.677 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.677 CXX test/cpp_headers/nvmf.o 00:03:21.677 CXX test/cpp_headers/nvmf_spec.o 00:03:21.677 CXX test/cpp_headers/nvmf_transport.o 00:03:21.677 CXX test/cpp_headers/opal.o 00:03:21.677 CXX test/cpp_headers/opal_spec.o 00:03:21.677 CXX test/cpp_headers/pci_ids.o 00:03:21.677 CXX test/cpp_headers/pipe.o 00:03:21.677 CXX test/cpp_headers/queue.o 00:03:21.677 CXX test/cpp_headers/reduce.o 00:03:21.937 CXX test/cpp_headers/rpc.o 00:03:21.937 CXX 
test/cpp_headers/scheduler.o 00:03:21.937 CXX test/cpp_headers/scsi.o 00:03:21.937 CXX test/cpp_headers/scsi_spec.o 00:03:21.937 CXX test/cpp_headers/sock.o 00:03:21.937 CXX test/cpp_headers/stdinc.o 00:03:21.937 LINK bdevperf 00:03:21.937 CXX test/cpp_headers/string.o 00:03:21.937 CXX test/cpp_headers/thread.o 00:03:21.937 CXX test/cpp_headers/trace.o 00:03:21.937 CXX test/cpp_headers/trace_parser.o 00:03:21.937 CXX test/cpp_headers/tree.o 00:03:21.937 CXX test/cpp_headers/ublk.o 00:03:21.937 CXX test/cpp_headers/util.o 00:03:21.937 CXX test/cpp_headers/uuid.o 00:03:21.937 CXX test/cpp_headers/version.o 00:03:21.937 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.198 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.198 CXX test/cpp_headers/vhost.o 00:03:22.198 CXX test/cpp_headers/vmd.o 00:03:22.198 CXX test/cpp_headers/xor.o 00:03:22.198 CXX test/cpp_headers/zipf.o 00:03:22.198 CC examples/nvmf/nvmf/nvmf.o 00:03:22.772 LINK nvmf 00:03:24.683 LINK esnap 00:03:24.942 00:03:24.942 real 1m28.423s 00:03:24.942 user 8m2.858s 00:03:24.942 sys 1m34.323s 00:03:24.942 23:37:36 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:24.942 23:37:36 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.942 ************************************ 00:03:24.942 END TEST make 00:03:24.942 ************************************ 00:03:24.942 23:37:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.942 23:37:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.942 23:37:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.942 23:37:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.942 23:37:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.942 23:37:36 -- pm/common@44 -- $ pid=5467 00:03:24.942 23:37:36 -- pm/common@50 -- $ kill -TERM 5467 00:03:24.942 23:37:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.942 23:37:36 -- pm/common@43 
-- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.942 23:37:36 -- pm/common@44 -- $ pid=5469 00:03:24.942 23:37:36 -- pm/common@50 -- $ kill -TERM 5469 00:03:24.942 23:37:36 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:24.942 23:37:36 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:25.201 23:37:36 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:25.201 23:37:36 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:25.201 23:37:36 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:25.201 23:37:36 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:25.201 23:37:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.201 23:37:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.201 23:37:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.201 23:37:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.201 23:37:36 -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.201 23:37:36 -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.201 23:37:36 -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.201 23:37:36 -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.201 23:37:36 -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.201 23:37:36 -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.201 23:37:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.201 23:37:36 -- scripts/common.sh@344 -- # case "$op" in 00:03:25.201 23:37:36 -- scripts/common.sh@345 -- # : 1 00:03:25.201 23:37:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.201 23:37:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.201 23:37:36 -- scripts/common.sh@365 -- # decimal 1 00:03:25.201 23:37:36 -- scripts/common.sh@353 -- # local d=1 00:03:25.201 23:37:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.201 23:37:36 -- scripts/common.sh@355 -- # echo 1 00:03:25.201 23:37:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.201 23:37:36 -- scripts/common.sh@366 -- # decimal 2 00:03:25.201 23:37:36 -- scripts/common.sh@353 -- # local d=2 00:03:25.201 23:37:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.201 23:37:36 -- scripts/common.sh@355 -- # echo 2 00:03:25.201 23:37:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.201 23:37:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.201 23:37:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.201 23:37:36 -- scripts/common.sh@368 -- # return 0 00:03:25.201 23:37:36 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.201 23:37:36 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:25.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.201 --rc genhtml_branch_coverage=1 00:03:25.201 --rc genhtml_function_coverage=1 00:03:25.201 --rc genhtml_legend=1 00:03:25.201 --rc geninfo_all_blocks=1 00:03:25.201 --rc geninfo_unexecuted_blocks=1 00:03:25.201 00:03:25.201 ' 00:03:25.201 23:37:36 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:25.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.201 --rc genhtml_branch_coverage=1 00:03:25.201 --rc genhtml_function_coverage=1 00:03:25.201 --rc genhtml_legend=1 00:03:25.201 --rc geninfo_all_blocks=1 00:03:25.201 --rc geninfo_unexecuted_blocks=1 00:03:25.201 00:03:25.201 ' 00:03:25.201 23:37:36 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:25.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.201 --rc genhtml_branch_coverage=1 00:03:25.201 --rc 
genhtml_function_coverage=1 00:03:25.201 --rc genhtml_legend=1 00:03:25.201 --rc geninfo_all_blocks=1 00:03:25.201 --rc geninfo_unexecuted_blocks=1 00:03:25.201 00:03:25.201 ' 00:03:25.201 23:37:36 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:25.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.201 --rc genhtml_branch_coverage=1 00:03:25.201 --rc genhtml_function_coverage=1 00:03:25.201 --rc genhtml_legend=1 00:03:25.201 --rc geninfo_all_blocks=1 00:03:25.201 --rc geninfo_unexecuted_blocks=1 00:03:25.201 00:03:25.201 ' 00:03:25.201 23:37:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.201 23:37:36 -- nvmf/common.sh@7 -- # uname -s 00:03:25.201 23:37:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.201 23:37:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.201 23:37:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.201 23:37:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.201 23:37:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.201 23:37:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.201 23:37:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.201 23:37:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.201 23:37:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.201 23:37:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.201 23:37:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ab26a3d-4419-430f-b16d-7ba8ab10a33a 00:03:25.201 23:37:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=9ab26a3d-4419-430f-b16d-7ba8ab10a33a 00:03:25.201 23:37:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.201 23:37:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.201 23:37:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:25.201 23:37:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:25.201 23:37:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.201 23:37:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:25.201 23:37:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.201 23:37:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.201 23:37:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.201 23:37:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.201 23:37:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.201 23:37:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.201 23:37:36 -- paths/export.sh@5 -- # export PATH 00:03:25.201 23:37:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.201 23:37:36 -- nvmf/common.sh@51 -- # : 0 00:03:25.201 23:37:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:25.201 23:37:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:25.201 23:37:36 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:25.201 23:37:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.201 23:37:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.201 23:37:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:25.201 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:25.201 23:37:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:25.201 23:37:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:25.201 23:37:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:25.201 23:37:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.201 23:37:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.201 23:37:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.201 23:37:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.201 23:37:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.201 23:37:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.201 23:37:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.201 23:37:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.201 23:37:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.201 23:37:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.201 23:37:36 -- spdk/autotest.sh@48 -- # udevadm_pid=54477 00:03:25.201 23:37:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.201 23:37:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.201 23:37:36 -- pm/common@17 -- # local monitor 00:03:25.201 23:37:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.201 23:37:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.201 23:37:36 -- pm/common@25 -- # sleep 1 00:03:25.201 23:37:36 -- pm/common@21 -- # date +%s 00:03:25.201 23:37:36 -- 
pm/common@21 -- # date +%s 00:03:25.201 23:37:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733528256 00:03:25.201 23:37:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733528256 00:03:25.201 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733528256_collect-cpu-load.pm.log 00:03:25.201 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733528256_collect-vmstat.pm.log 00:03:26.582 23:37:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.582 23:37:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.582 23:37:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:26.582 23:37:37 -- common/autotest_common.sh@10 -- # set +x 00:03:26.582 23:37:37 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.582 23:37:37 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:26.582 23:37:37 -- common/autotest_common.sh@10 -- # set +x 00:03:26.582 23:37:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:26.582 23:37:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:26.582 23:37:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:26.582 23:37:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:26.582 23:37:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:26.582 23:37:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.582 23:37:37 -- common/autotest_common.sh@1457 -- # uname 00:03:26.582 23:37:37 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:26.582 23:37:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.582 23:37:37 -- common/autotest_common.sh@1477 -- 
# uname 00:03:26.582 23:37:37 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:26.582 23:37:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:26.582 23:37:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:26.582 lcov: LCOV version 1.15 00:03:26.582 23:37:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:41.516 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.516 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:56.400 23:38:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:56.400 23:38:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.400 23:38:06 -- common/autotest_common.sh@10 -- # set +x 00:03:56.400 23:38:06 -- spdk/autotest.sh@78 -- # rm -f 00:03:56.400 23:38:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.400 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:56.400 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:56.400 23:38:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:56.400 23:38:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:56.400 23:38:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:56.400 23:38:07 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:56.400 
23:38:07 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:56.400 23:38:07 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:56.400 23:38:07 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:56.400 23:38:07 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:56.400 23:38:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:56.400 23:38:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:56.400 23:38:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:56.400 23:38:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.400 23:38:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:56.400 23:38:07 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:56.400 23:38:07 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:56.400 23:38:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:56.400 23:38:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:56.400 23:38:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:56.400 23:38:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.400 23:38:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:56.400 23:38:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:56.400 23:38:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:56.400 23:38:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:56.400 23:38:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:56.400 23:38:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:56.400 23:38:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:56.400 23:38:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:56.400 23:38:07 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:56.400 23:38:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:56.400 23:38:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:56.400 23:38:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:56.400 23:38:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.400 23:38:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.400 23:38:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:56.400 23:38:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:56.400 23:38:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:56.400 No valid GPT data, bailing 00:03:56.400 23:38:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.400 23:38:07 -- scripts/common.sh@394 -- # pt= 00:03:56.400 23:38:07 -- scripts/common.sh@395 -- # return 1 00:03:56.400 23:38:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:56.400 1+0 records in 00:03:56.400 1+0 records out 00:03:56.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651719 s, 161 MB/s 00:03:56.400 23:38:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.400 23:38:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.400 23:38:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:56.400 23:38:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:56.400 23:38:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:56.400 No valid GPT data, bailing 00:03:56.400 23:38:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:56.400 23:38:07 -- scripts/common.sh@394 -- # pt= 00:03:56.400 23:38:07 -- scripts/common.sh@395 -- # return 1 00:03:56.400 23:38:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:56.400 1+0 records in 00:03:56.400 1+0 records 
out 00:03:56.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445855 s, 235 MB/s 00:03:56.400 23:38:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.400 23:38:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.400 23:38:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:56.400 23:38:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:56.400 23:38:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:56.400 No valid GPT data, bailing 00:03:56.400 23:38:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:56.400 23:38:07 -- scripts/common.sh@394 -- # pt= 00:03:56.400 23:38:07 -- scripts/common.sh@395 -- # return 1 00:03:56.400 23:38:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:56.400 1+0 records in 00:03:56.400 1+0 records out 00:03:56.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434126 s, 242 MB/s 00:03:56.400 23:38:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.400 23:38:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.400 23:38:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:56.400 23:38:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:56.400 23:38:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:56.400 No valid GPT data, bailing 00:03:56.400 23:38:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:56.400 23:38:07 -- scripts/common.sh@394 -- # pt= 00:03:56.400 23:38:07 -- scripts/common.sh@395 -- # return 1 00:03:56.400 23:38:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:56.400 1+0 records in 00:03:56.400 1+0 records out 00:03:56.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611453 s, 171 MB/s 00:03:56.400 23:38:07 -- spdk/autotest.sh@105 -- # sync 00:03:56.400 23:38:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:03:56.400 23:38:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:56.400 23:38:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.933 23:38:10 -- spdk/autotest.sh@111 -- # uname -s 00:03:58.933 23:38:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:58.933 23:38:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:58.933 23:38:10 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:59.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.871 Hugepages 00:03:59.871 node hugesize free / total 00:03:59.871 node0 1048576kB 0 / 0 00:03:59.871 node0 2048kB 0 / 0 00:03:59.871 00:03:59.871 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.871 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:59.871 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:00.131 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:00.131 23:38:11 -- spdk/autotest.sh@117 -- # uname -s 00:04:00.131 23:38:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:00.131 23:38:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:00.131 23:38:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.957 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.957 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.957 23:38:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:01.895 23:38:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:01.895 23:38:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:01.895 23:38:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.895 23:38:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:04:01.895 23:38:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.895 23:38:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.895 23:38:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.895 23:38:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:01.895 23:38:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.154 23:38:13 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:02.154 23:38:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.154 23:38:13 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.693 Waiting for block devices as requested 00:04:02.693 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.693 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.693 23:38:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.693 23:38:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:02.693 23:38:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.693 23:38:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.693 23:38:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.693 23:38:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.693 23:38:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.693 23:38:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:02.693 23:38:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:02.693 
23:38:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:02.693 23:38:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.693 23:38:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:02.693 23:38:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.693 23:38:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.693 23:38:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.693 23:38:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.693 23:38:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.693 23:38:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:02.693 23:38:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.693 23:38:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.693 23:38:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.693 23:38:14 -- common/autotest_common.sh@1543 -- # continue 00:04:02.693 23:38:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.693 23:38:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.693 23:38:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.693 23:38:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.952 23:38:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.952 23:38:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.952 23:38:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.952 23:38:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:02.952 23:38:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:02.952 23:38:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:02.953 23:38:14 -- 
common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.953 23:38:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.953 23:38:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:02.953 23:38:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.953 23:38:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.953 23:38:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.953 23:38:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:02.953 23:38:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.953 23:38:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.953 23:38:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.953 23:38:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.953 23:38:14 -- common/autotest_common.sh@1543 -- # continue 00:04:02.953 23:38:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:02.953 23:38:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.953 23:38:14 -- common/autotest_common.sh@10 -- # set +x 00:04:02.953 23:38:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:02.953 23:38:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.953 23:38:14 -- common/autotest_common.sh@10 -- # set +x 00:04:02.953 23:38:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.781 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.781 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.781 23:38:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.781 23:38:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.781 23:38:15 -- common/autotest_common.sh@10 -- # set +x 00:04:03.781 23:38:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.781 23:38:15 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:03.781 23:38:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.781 23:38:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:03.781 23:38:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:03.781 23:38:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:03.781 23:38:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.781 23:38:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:03.781 23:38:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.781 23:38:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.781 23:38:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.781 23:38:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.781 23:38:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:04.040 23:38:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:04.040 23:38:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.040 23:38:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:04.040 23:38:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:04.040 23:38:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:04.040 23:38:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.041 23:38:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:04.041 23:38:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:04.041 23:38:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:04.041 23:38:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.041 23:38:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:04.041 23:38:15 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:04.041 23:38:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:04.041 23:38:15 -- common/autotest_common.sh@1580 -- # return 0 00:04:04.041 23:38:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:04.041 23:38:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:04.041 23:38:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:04.041 23:38:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:04.041 23:38:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:04.041 23:38:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.041 23:38:15 -- common/autotest_common.sh@10 -- # set +x 00:04:04.041 23:38:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:04.041 23:38:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.041 23:38:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.041 23:38:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.041 23:38:15 -- common/autotest_common.sh@10 -- # set +x 00:04:04.041 ************************************ 00:04:04.041 START TEST env 00:04:04.041 ************************************ 00:04:04.041 23:38:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.041 * Looking for test storage... 
00:04:04.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:04.041 23:38:15 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:04.041 23:38:15 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:04.041 23:38:15 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:04.300 23:38:15 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:04.300 23:38:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.300 23:38:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.300 23:38:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.300 23:38:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.300 23:38:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.300 23:38:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.300 23:38:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.300 23:38:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.300 23:38:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.300 23:38:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.300 23:38:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.300 23:38:15 env -- scripts/common.sh@344 -- # case "$op" in 00:04:04.300 23:38:15 env -- scripts/common.sh@345 -- # : 1 00:04:04.300 23:38:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.300 23:38:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.300 23:38:15 env -- scripts/common.sh@365 -- # decimal 1 00:04:04.300 23:38:15 env -- scripts/common.sh@353 -- # local d=1 00:04:04.300 23:38:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.300 23:38:15 env -- scripts/common.sh@355 -- # echo 1 00:04:04.300 23:38:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.300 23:38:15 env -- scripts/common.sh@366 -- # decimal 2 00:04:04.300 23:38:15 env -- scripts/common.sh@353 -- # local d=2 00:04:04.300 23:38:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.300 23:38:15 env -- scripts/common.sh@355 -- # echo 2 00:04:04.300 23:38:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.300 23:38:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.300 23:38:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.300 23:38:15 env -- scripts/common.sh@368 -- # return 0 00:04:04.300 23:38:15 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.300 23:38:15 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:04.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.300 --rc genhtml_branch_coverage=1 00:04:04.300 --rc genhtml_function_coverage=1 00:04:04.300 --rc genhtml_legend=1 00:04:04.300 --rc geninfo_all_blocks=1 00:04:04.300 --rc geninfo_unexecuted_blocks=1 00:04:04.300 00:04:04.300 ' 00:04:04.300 23:38:15 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:04.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.300 --rc genhtml_branch_coverage=1 00:04:04.300 --rc genhtml_function_coverage=1 00:04:04.300 --rc genhtml_legend=1 00:04:04.300 --rc geninfo_all_blocks=1 00:04:04.300 --rc geninfo_unexecuted_blocks=1 00:04:04.300 00:04:04.301 ' 00:04:04.301 23:38:15 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:04.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:04.301 --rc genhtml_branch_coverage=1 00:04:04.301 --rc genhtml_function_coverage=1 00:04:04.301 --rc genhtml_legend=1 00:04:04.301 --rc geninfo_all_blocks=1 00:04:04.301 --rc geninfo_unexecuted_blocks=1 00:04:04.301 00:04:04.301 ' 00:04:04.301 23:38:15 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:04.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.301 --rc genhtml_branch_coverage=1 00:04:04.301 --rc genhtml_function_coverage=1 00:04:04.301 --rc genhtml_legend=1 00:04:04.301 --rc geninfo_all_blocks=1 00:04:04.301 --rc geninfo_unexecuted_blocks=1 00:04:04.301 00:04:04.301 ' 00:04:04.301 23:38:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.301 23:38:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.301 23:38:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.301 23:38:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.301 ************************************ 00:04:04.301 START TEST env_memory 00:04:04.301 ************************************ 00:04:04.301 23:38:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.301 00:04:04.301 00:04:04.301 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.301 http://cunit.sourceforge.net/ 00:04:04.301 00:04:04.301 00:04:04.301 Suite: memory 00:04:04.301 Test: alloc and free memory map ...[2024-12-06 23:38:15.699228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.301 passed 00:04:04.301 Test: mem map translation ...[2024-12-06 23:38:15.743122] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.301 [2024-12-06 23:38:15.743183] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.301 [2024-12-06 23:38:15.743237] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.301 [2024-12-06 23:38:15.743257] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.301 passed 00:04:04.301 Test: mem map registration ...[2024-12-06 23:38:15.829354] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:04.301 [2024-12-06 23:38:15.829424] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:04.561 passed 00:04:04.561 Test: mem map adjacent registrations ...passed 00:04:04.561 00:04:04.561 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.561 suites 1 1 n/a 0 0 00:04:04.561 tests 4 4 4 0 0 00:04:04.561 asserts 152 152 152 0 n/a 00:04:04.561 00:04:04.561 Elapsed time = 0.273 seconds 00:04:04.561 00:04:04.561 real 0m0.316s 00:04:04.561 user 0m0.276s 00:04:04.561 sys 0m0.032s 00:04:04.561 23:38:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.561 23:38:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.561 ************************************ 00:04:04.561 END TEST env_memory 00:04:04.561 ************************************ 00:04:04.561 23:38:16 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.561 23:38:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.561 23:38:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.561 23:38:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.561 
************************************ 00:04:04.561 START TEST env_vtophys 00:04:04.561 ************************************ 00:04:04.561 23:38:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.561 EAL: lib.eal log level changed from notice to debug 00:04:04.561 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.561 EAL: Detected lcore 9 as core 0 on socket 0 00:04:04.561 EAL: Maximum logical cores by configuration: 128 00:04:04.561 EAL: Detected CPU lcores: 10 00:04:04.561 EAL: Detected NUMA nodes: 1 00:04:04.561 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.561 EAL: Detected shared linkage of DPDK 00:04:04.561 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.561 EAL: Selected IOVA mode 'PA' 00:04:04.561 EAL: Probing VFIO support... 00:04:04.561 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.561 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:04.561 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.561 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.561 EAL: Setting up physically contiguous memory... 
00:04:04.561 EAL: Setting maximum number of open files to 524288 00:04:04.561 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.561 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.561 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.561 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.561 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.561 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.561 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.561 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.561 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.562 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.562 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.562 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.562 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.562 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.562 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.562 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.562 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.562 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.562 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.562 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.562 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.562 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.562 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.562 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.562 EAL: Hugepages will be freed exactly as allocated. 
00:04:04.562 EAL: No shared files mode enabled, IPC is disabled 00:04:04.562 EAL: No shared files mode enabled, IPC is disabled 00:04:04.826 EAL: TSC frequency is ~2290000 KHz 00:04:04.826 EAL: Main lcore 0 is ready (tid=7f464e0b1a40;cpuset=[0]) 00:04:04.827 EAL: Trying to obtain current memory policy. 00:04:04.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.827 EAL: Restoring previous memory policy: 0 00:04:04.827 EAL: request: mp_malloc_sync 00:04:04.827 EAL: No shared files mode enabled, IPC is disabled 00:04:04.827 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.827 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.827 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.827 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.827 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:04.827 00:04:04.827 00:04:04.827 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.827 http://cunit.sourceforge.net/ 00:04:04.827 00:04:04.827 00:04:04.827 Suite: components_suite 00:04:05.086 Test: vtophys_malloc_test ...passed 00:04:05.086 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.086 EAL: Restoring previous memory policy: 4 00:04:05.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.086 EAL: request: mp_malloc_sync 00:04:05.086 EAL: No shared files mode enabled, IPC is disabled 00:04:05.086 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.086 EAL: request: mp_malloc_sync 00:04:05.086 EAL: No shared files mode enabled, IPC is disabled 00:04:05.086 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.086 EAL: Trying to obtain current memory policy. 
00:04:05.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.086 EAL: Restoring previous memory policy: 4 00:04:05.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.086 EAL: request: mp_malloc_sync 00:04:05.086 EAL: No shared files mode enabled, IPC is disabled 00:04:05.086 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.086 EAL: request: mp_malloc_sync 00:04:05.086 EAL: No shared files mode enabled, IPC is disabled 00:04:05.086 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.086 EAL: Trying to obtain current memory policy. 00:04:05.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.086 EAL: Restoring previous memory policy: 4 00:04:05.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.086 EAL: request: mp_malloc_sync 00:04:05.086 EAL: No shared files mode enabled, IPC is disabled 00:04:05.086 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.086 EAL: request: mp_malloc_sync 00:04:05.086 EAL: No shared files mode enabled, IPC is disabled 00:04:05.086 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.345 EAL: Trying to obtain current memory policy. 00:04:05.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.345 EAL: Restoring previous memory policy: 4 00:04:05.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.345 EAL: request: mp_malloc_sync 00:04:05.345 EAL: No shared files mode enabled, IPC is disabled 00:04:05.345 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.345 EAL: request: mp_malloc_sync 00:04:05.345 EAL: No shared files mode enabled, IPC is disabled 00:04:05.345 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.345 EAL: Trying to obtain current memory policy. 
00:04:05.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.345 EAL: Restoring previous memory policy: 4 00:04:05.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.345 EAL: request: mp_malloc_sync 00:04:05.345 EAL: No shared files mode enabled, IPC is disabled 00:04:05.345 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.345 EAL: request: mp_malloc_sync 00:04:05.345 EAL: No shared files mode enabled, IPC is disabled 00:04:05.345 EAL: Heap on socket 0 was shrunk by 34MB 00:04:05.345 EAL: Trying to obtain current memory policy. 00:04:05.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.345 EAL: Restoring previous memory policy: 4 00:04:05.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.345 EAL: request: mp_malloc_sync 00:04:05.345 EAL: No shared files mode enabled, IPC is disabled 00:04:05.345 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.604 EAL: request: mp_malloc_sync 00:04:05.604 EAL: No shared files mode enabled, IPC is disabled 00:04:05.604 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.604 EAL: Trying to obtain current memory policy. 00:04:05.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.604 EAL: Restoring previous memory policy: 4 00:04:05.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.604 EAL: request: mp_malloc_sync 00:04:05.604 EAL: No shared files mode enabled, IPC is disabled 00:04:05.604 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.862 EAL: request: mp_malloc_sync 00:04:05.862 EAL: No shared files mode enabled, IPC is disabled 00:04:05.862 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.121 EAL: Trying to obtain current memory policy. 
00:04:06.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.380 EAL: Restoring previous memory policy: 4 00:04:06.380 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.380 EAL: request: mp_malloc_sync 00:04:06.380 EAL: No shared files mode enabled, IPC is disabled 00:04:06.380 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.639 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.639 EAL: request: mp_malloc_sync 00:04:06.639 EAL: No shared files mode enabled, IPC is disabled 00:04:06.639 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.207 EAL: Trying to obtain current memory policy. 00:04:07.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.207 EAL: Restoring previous memory policy: 4 00:04:07.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.207 EAL: request: mp_malloc_sync 00:04:07.207 EAL: No shared files mode enabled, IPC is disabled 00:04:07.207 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.589 EAL: request: mp_malloc_sync 00:04:08.589 EAL: No shared files mode enabled, IPC is disabled 00:04:08.589 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.186 EAL: Trying to obtain current memory policy. 
00:04:09.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.445 EAL: Restoring previous memory policy: 4 00:04:09.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.445 EAL: request: mp_malloc_sync 00:04:09.445 EAL: No shared files mode enabled, IPC is disabled 00:04:09.445 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.604 EAL: request: mp_malloc_sync 00:04:11.604 EAL: No shared files mode enabled, IPC is disabled 00:04:11.604 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:13.505 passed 00:04:13.505 00:04:13.505 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.505 suites 1 1 n/a 0 0 00:04:13.505 tests 2 2 2 0 0 00:04:13.505 asserts 5705 5705 5705 0 n/a 00:04:13.505 00:04:13.505 Elapsed time = 8.482 seconds 00:04:13.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.505 EAL: request: mp_malloc_sync 00:04:13.505 EAL: No shared files mode enabled, IPC is disabled 00:04:13.505 EAL: Heap on socket 0 was shrunk by 2MB 00:04:13.505 EAL: No shared files mode enabled, IPC is disabled 00:04:13.505 EAL: No shared files mode enabled, IPC is disabled 00:04:13.505 EAL: No shared files mode enabled, IPC is disabled 00:04:13.505 00:04:13.505 real 0m8.803s 00:04:13.505 user 0m7.826s 00:04:13.505 sys 0m0.821s 00:04:13.505 23:38:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.505 23:38:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:13.505 ************************************ 00:04:13.505 END TEST env_vtophys 00:04:13.505 ************************************ 00:04:13.505 23:38:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.505 23:38:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.505 23:38:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.505 23:38:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.505 
************************************ 00:04:13.505 START TEST env_pci 00:04:13.505 ************************************ 00:04:13.505 23:38:24 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.505 00:04:13.505 00:04:13.505 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.505 http://cunit.sourceforge.net/ 00:04:13.505 00:04:13.505 00:04:13.505 Suite: pci 00:04:13.505 Test: pci_hook ...[2024-12-06 23:38:24.923863] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56766 has claimed it 00:04:13.505 passed 00:04:13.505 00:04:13.505 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.505 suites 1 1 n/a 0 0 00:04:13.505 tests 1 1 1 0 0 00:04:13.505 asserts 25 25 25 0 n/a 00:04:13.505 00:04:13.505 Elapsed time = 0.005 seconds 00:04:13.505 EAL: Cannot find device (10000:00:01.0) 00:04:13.505 EAL: Failed to attach device on primary process 00:04:13.505 00:04:13.505 real 0m0.101s 00:04:13.505 user 0m0.045s 00:04:13.505 sys 0m0.055s 00:04:13.505 23:38:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.505 23:38:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:13.505 ************************************ 00:04:13.505 END TEST env_pci 00:04:13.505 ************************************ 00:04:13.505 23:38:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:13.505 23:38:25 env -- env/env.sh@15 -- # uname 00:04:13.505 23:38:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:13.505 23:38:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:13.505 23:38:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.505 23:38:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:13.505 23:38:25 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.505 23:38:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.505 ************************************ 00:04:13.505 START TEST env_dpdk_post_init 00:04:13.505 ************************************ 00:04:13.505 23:38:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.773 EAL: Detected CPU lcores: 10 00:04:13.773 EAL: Detected NUMA nodes: 1 00:04:13.773 EAL: Detected shared linkage of DPDK 00:04:13.773 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.773 EAL: Selected IOVA mode 'PA' 00:04:13.773 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.773 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:13.773 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:13.773 Starting DPDK initialization... 00:04:13.773 Starting SPDK post initialization... 00:04:13.773 SPDK NVMe probe 00:04:13.773 Attaching to 0000:00:10.0 00:04:13.773 Attaching to 0000:00:11.0 00:04:13.773 Attached to 0000:00:10.0 00:04:13.773 Attached to 0000:00:11.0 00:04:13.773 Cleaning up... 
00:04:14.030 00:04:14.030 real 0m0.286s 00:04:14.030 user 0m0.093s 00:04:14.030 sys 0m0.094s 00:04:14.030 23:38:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.030 23:38:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.030 ************************************ 00:04:14.030 END TEST env_dpdk_post_init 00:04:14.030 ************************************ 00:04:14.030 23:38:25 env -- env/env.sh@26 -- # uname 00:04:14.030 23:38:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.030 23:38:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.030 23:38:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.030 23:38:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.030 23:38:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.030 ************************************ 00:04:14.030 START TEST env_mem_callbacks 00:04:14.030 ************************************ 00:04:14.030 23:38:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.030 EAL: Detected CPU lcores: 10 00:04:14.030 EAL: Detected NUMA nodes: 1 00:04:14.030 EAL: Detected shared linkage of DPDK 00:04:14.030 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.030 EAL: Selected IOVA mode 'PA' 00:04:14.289 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.289 00:04:14.289 00:04:14.289 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.289 http://cunit.sourceforge.net/ 00:04:14.289 00:04:14.289 00:04:14.289 Suite: memory 00:04:14.289 Test: test ... 
00:04:14.289 register 0x200000200000 2097152 00:04:14.289 malloc 3145728 00:04:14.289 register 0x200000400000 4194304 00:04:14.289 buf 0x2000004fffc0 len 3145728 PASSED 00:04:14.289 malloc 64 00:04:14.289 buf 0x2000004ffec0 len 64 PASSED 00:04:14.289 malloc 4194304 00:04:14.289 register 0x200000800000 6291456 00:04:14.289 buf 0x2000009fffc0 len 4194304 PASSED 00:04:14.289 free 0x2000004fffc0 3145728 00:04:14.289 free 0x2000004ffec0 64 00:04:14.289 unregister 0x200000400000 4194304 PASSED 00:04:14.289 free 0x2000009fffc0 4194304 00:04:14.289 unregister 0x200000800000 6291456 PASSED 00:04:14.289 malloc 8388608 00:04:14.289 register 0x200000400000 10485760 00:04:14.289 buf 0x2000005fffc0 len 8388608 PASSED 00:04:14.289 free 0x2000005fffc0 8388608 00:04:14.289 unregister 0x200000400000 10485760 PASSED 00:04:14.289 passed 00:04:14.289 00:04:14.289 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.289 suites 1 1 n/a 0 0 00:04:14.289 tests 1 1 1 0 0 00:04:14.289 asserts 15 15 15 0 n/a 00:04:14.289 00:04:14.289 Elapsed time = 0.091 seconds 00:04:14.289 00:04:14.289 real 0m0.294s 00:04:14.289 user 0m0.118s 00:04:14.289 sys 0m0.074s 00:04:14.289 23:38:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.289 23:38:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:14.289 ************************************ 00:04:14.289 END TEST env_mem_callbacks 00:04:14.289 ************************************ 00:04:14.289 ************************************ 00:04:14.289 END TEST env 00:04:14.289 ************************************ 00:04:14.289 00:04:14.289 real 0m10.337s 00:04:14.289 user 0m8.577s 00:04:14.289 sys 0m1.416s 00:04:14.289 23:38:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.289 23:38:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.289 23:38:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:14.289 23:38:25 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.289 23:38:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.289 23:38:25 -- common/autotest_common.sh@10 -- # set +x 00:04:14.289 ************************************ 00:04:14.289 START TEST rpc 00:04:14.289 ************************************ 00:04:14.289 23:38:25 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:14.548 * Looking for test storage... 00:04:14.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.548 23:38:25 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:14.548 23:38:25 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:14.548 23:38:25 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:14.548 23:38:26 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:14.548 23:38:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.548 23:38:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.548 23:38:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.548 23:38:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.548 23:38:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.548 23:38:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.548 23:38:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.548 23:38:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.548 23:38:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.548 23:38:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.548 23:38:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.548 23:38:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.548 23:38:26 rpc -- scripts/common.sh@345 -- # : 1 00:04:14.548 23:38:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.548 23:38:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.548 23:38:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.548 23:38:26 rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.548 23:38:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.548 23:38:26 rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.548 23:38:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.548 23:38:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.548 23:38:26 rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.548 23:38:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.548 23:38:26 rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.548 23:38:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.548 23:38:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.548 23:38:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.548 23:38:26 rpc -- scripts/common.sh@368 -- # return 0 00:04:14.548 23:38:26 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.548 23:38:26 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:14.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.548 --rc genhtml_branch_coverage=1 00:04:14.548 --rc genhtml_function_coverage=1 00:04:14.548 --rc genhtml_legend=1 00:04:14.548 --rc geninfo_all_blocks=1 00:04:14.548 --rc geninfo_unexecuted_blocks=1 00:04:14.548 00:04:14.548 ' 00:04:14.548 23:38:26 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:14.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.548 --rc genhtml_branch_coverage=1 00:04:14.548 --rc genhtml_function_coverage=1 00:04:14.548 --rc genhtml_legend=1 00:04:14.548 --rc geninfo_all_blocks=1 00:04:14.548 --rc geninfo_unexecuted_blocks=1 00:04:14.548 00:04:14.548 ' 00:04:14.548 23:38:26 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:14.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:14.548 --rc genhtml_branch_coverage=1 00:04:14.548 --rc genhtml_function_coverage=1 00:04:14.548 --rc genhtml_legend=1 00:04:14.548 --rc geninfo_all_blocks=1 00:04:14.548 --rc geninfo_unexecuted_blocks=1 00:04:14.548 00:04:14.548 ' 00:04:14.548 23:38:26 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:14.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.548 --rc genhtml_branch_coverage=1 00:04:14.548 --rc genhtml_function_coverage=1 00:04:14.548 --rc genhtml_legend=1 00:04:14.548 --rc geninfo_all_blocks=1 00:04:14.549 --rc geninfo_unexecuted_blocks=1 00:04:14.549 00:04:14.549 ' 00:04:14.549 23:38:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56893 00:04:14.549 23:38:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:14.549 23:38:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.549 23:38:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56893 00:04:14.549 23:38:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 56893 ']' 00:04:14.549 23:38:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.549 23:38:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.549 23:38:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.549 23:38:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.549 23:38:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.807 [2024-12-06 23:38:26.150997] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:04:14.807 [2024-12-06 23:38:26.151121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56893 ] 00:04:14.807 [2024-12-06 23:38:26.329454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.067 [2024-12-06 23:38:26.441615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.067 [2024-12-06 23:38:26.441689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56893' to capture a snapshot of events at runtime. 00:04:15.067 [2024-12-06 23:38:26.441699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.067 [2024-12-06 23:38:26.441709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.067 [2024-12-06 23:38:26.441717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56893 for offline analysis/debug. 
00:04:15.067 [2024-12-06 23:38:26.442956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.006 23:38:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.006 23:38:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:16.006 23:38:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.006 23:38:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.006 23:38:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.006 23:38:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.006 23:38:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.006 23:38:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.006 23:38:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.006 ************************************ 00:04:16.006 START TEST rpc_integrity 00:04:16.006 ************************************ 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.007 23:38:27 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.007 { 00:04:16.007 "name": "Malloc0", 00:04:16.007 "aliases": [ 00:04:16.007 "45bef6ec-6f39-47ac-b227-e76006521b18" 00:04:16.007 ], 00:04:16.007 "product_name": "Malloc disk", 00:04:16.007 "block_size": 512, 00:04:16.007 "num_blocks": 16384, 00:04:16.007 "uuid": "45bef6ec-6f39-47ac-b227-e76006521b18", 00:04:16.007 "assigned_rate_limits": { 00:04:16.007 "rw_ios_per_sec": 0, 00:04:16.007 "rw_mbytes_per_sec": 0, 00:04:16.007 "r_mbytes_per_sec": 0, 00:04:16.007 "w_mbytes_per_sec": 0 00:04:16.007 }, 00:04:16.007 "claimed": false, 00:04:16.007 "zoned": false, 00:04:16.007 "supported_io_types": { 00:04:16.007 "read": true, 00:04:16.007 "write": true, 00:04:16.007 "unmap": true, 00:04:16.007 "flush": true, 00:04:16.007 "reset": true, 00:04:16.007 "nvme_admin": false, 00:04:16.007 "nvme_io": false, 00:04:16.007 "nvme_io_md": false, 00:04:16.007 "write_zeroes": true, 00:04:16.007 "zcopy": true, 00:04:16.007 "get_zone_info": false, 00:04:16.007 "zone_management": false, 00:04:16.007 "zone_append": false, 00:04:16.007 "compare": false, 00:04:16.007 "compare_and_write": false, 00:04:16.007 "abort": true, 00:04:16.007 "seek_hole": false, 
00:04:16.007 "seek_data": false, 00:04:16.007 "copy": true, 00:04:16.007 "nvme_iov_md": false 00:04:16.007 }, 00:04:16.007 "memory_domains": [ 00:04:16.007 { 00:04:16.007 "dma_device_id": "system", 00:04:16.007 "dma_device_type": 1 00:04:16.007 }, 00:04:16.007 { 00:04:16.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.007 "dma_device_type": 2 00:04:16.007 } 00:04:16.007 ], 00:04:16.007 "driver_specific": {} 00:04:16.007 } 00:04:16.007 ]' 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.007 [2024-12-06 23:38:27.518287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.007 [2024-12-06 23:38:27.518366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.007 [2024-12-06 23:38:27.518392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:16.007 [2024-12-06 23:38:27.518407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.007 [2024-12-06 23:38:27.520691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.007 [2024-12-06 23:38:27.520735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.007 Passthru0 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:16.007 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.007 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.007 { 00:04:16.007 "name": "Malloc0", 00:04:16.007 "aliases": [ 00:04:16.007 "45bef6ec-6f39-47ac-b227-e76006521b18" 00:04:16.007 ], 00:04:16.007 "product_name": "Malloc disk", 00:04:16.007 "block_size": 512, 00:04:16.007 "num_blocks": 16384, 00:04:16.007 "uuid": "45bef6ec-6f39-47ac-b227-e76006521b18", 00:04:16.007 "assigned_rate_limits": { 00:04:16.007 "rw_ios_per_sec": 0, 00:04:16.007 "rw_mbytes_per_sec": 0, 00:04:16.007 "r_mbytes_per_sec": 0, 00:04:16.007 "w_mbytes_per_sec": 0 00:04:16.007 }, 00:04:16.007 "claimed": true, 00:04:16.007 "claim_type": "exclusive_write", 00:04:16.007 "zoned": false, 00:04:16.007 "supported_io_types": { 00:04:16.007 "read": true, 00:04:16.007 "write": true, 00:04:16.007 "unmap": true, 00:04:16.007 "flush": true, 00:04:16.007 "reset": true, 00:04:16.007 "nvme_admin": false, 00:04:16.007 "nvme_io": false, 00:04:16.007 "nvme_io_md": false, 00:04:16.007 "write_zeroes": true, 00:04:16.007 "zcopy": true, 00:04:16.007 "get_zone_info": false, 00:04:16.007 "zone_management": false, 00:04:16.007 "zone_append": false, 00:04:16.007 "compare": false, 00:04:16.007 "compare_and_write": false, 00:04:16.007 "abort": true, 00:04:16.007 "seek_hole": false, 00:04:16.007 "seek_data": false, 00:04:16.007 "copy": true, 00:04:16.007 "nvme_iov_md": false 00:04:16.007 }, 00:04:16.007 "memory_domains": [ 00:04:16.007 { 00:04:16.007 "dma_device_id": "system", 00:04:16.007 "dma_device_type": 1 00:04:16.007 }, 00:04:16.007 { 00:04:16.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.007 "dma_device_type": 2 00:04:16.007 } 00:04:16.007 ], 00:04:16.007 "driver_specific": {} 00:04:16.007 }, 00:04:16.007 { 00:04:16.007 "name": "Passthru0", 00:04:16.007 "aliases": [ 00:04:16.007 "b9dafeee-a59d-5308-9b89-e5ba9df3279a" 00:04:16.007 ], 00:04:16.007 "product_name": "passthru", 00:04:16.007 
"block_size": 512, 00:04:16.007 "num_blocks": 16384, 00:04:16.007 "uuid": "b9dafeee-a59d-5308-9b89-e5ba9df3279a", 00:04:16.007 "assigned_rate_limits": { 00:04:16.007 "rw_ios_per_sec": 0, 00:04:16.007 "rw_mbytes_per_sec": 0, 00:04:16.007 "r_mbytes_per_sec": 0, 00:04:16.007 "w_mbytes_per_sec": 0 00:04:16.008 }, 00:04:16.008 "claimed": false, 00:04:16.008 "zoned": false, 00:04:16.008 "supported_io_types": { 00:04:16.008 "read": true, 00:04:16.008 "write": true, 00:04:16.008 "unmap": true, 00:04:16.008 "flush": true, 00:04:16.008 "reset": true, 00:04:16.008 "nvme_admin": false, 00:04:16.008 "nvme_io": false, 00:04:16.008 "nvme_io_md": false, 00:04:16.008 "write_zeroes": true, 00:04:16.008 "zcopy": true, 00:04:16.008 "get_zone_info": false, 00:04:16.008 "zone_management": false, 00:04:16.008 "zone_append": false, 00:04:16.008 "compare": false, 00:04:16.008 "compare_and_write": false, 00:04:16.008 "abort": true, 00:04:16.008 "seek_hole": false, 00:04:16.008 "seek_data": false, 00:04:16.008 "copy": true, 00:04:16.008 "nvme_iov_md": false 00:04:16.008 }, 00:04:16.008 "memory_domains": [ 00:04:16.008 { 00:04:16.008 "dma_device_id": "system", 00:04:16.008 "dma_device_type": 1 00:04:16.008 }, 00:04:16.008 { 00:04:16.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.008 "dma_device_type": 2 00:04:16.008 } 00:04:16.008 ], 00:04:16.008 "driver_specific": { 00:04:16.008 "passthru": { 00:04:16.008 "name": "Passthru0", 00:04:16.008 "base_bdev_name": "Malloc0" 00:04:16.008 } 00:04:16.008 } 00:04:16.008 } 00:04:16.008 ]' 00:04:16.008 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.267 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.267 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 23:38:27 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.267 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.267 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.267 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.267 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.267 23:38:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.267 00:04:16.267 real 0m0.337s 00:04:16.267 user 0m0.190s 00:04:16.267 sys 0m0.045s 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.267 23:38:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 ************************************ 00:04:16.267 END TEST rpc_integrity 00:04:16.267 ************************************ 00:04:16.267 23:38:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.267 23:38:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.267 23:38:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.267 23:38:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 ************************************ 00:04:16.267 START TEST rpc_plugins 00:04:16.267 ************************************ 00:04:16.267 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:16.267 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.267 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.267 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.267 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.267 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.267 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.267 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.267 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.267 { 00:04:16.267 "name": "Malloc1", 00:04:16.267 "aliases": [ 00:04:16.267 "c97d2b35-0fbc-424d-9e50-e78cf61673ca" 00:04:16.267 ], 00:04:16.267 "product_name": "Malloc disk", 00:04:16.267 "block_size": 4096, 00:04:16.267 "num_blocks": 256, 00:04:16.267 "uuid": "c97d2b35-0fbc-424d-9e50-e78cf61673ca", 00:04:16.267 "assigned_rate_limits": { 00:04:16.267 "rw_ios_per_sec": 0, 00:04:16.267 "rw_mbytes_per_sec": 0, 00:04:16.267 "r_mbytes_per_sec": 0, 00:04:16.267 "w_mbytes_per_sec": 0 00:04:16.267 }, 00:04:16.267 "claimed": false, 00:04:16.267 "zoned": false, 00:04:16.267 "supported_io_types": { 00:04:16.267 "read": true, 00:04:16.267 "write": true, 00:04:16.267 "unmap": true, 00:04:16.267 "flush": true, 00:04:16.267 "reset": true, 00:04:16.267 "nvme_admin": false, 00:04:16.267 "nvme_io": false, 00:04:16.267 "nvme_io_md": false, 00:04:16.267 "write_zeroes": true, 00:04:16.267 "zcopy": true, 00:04:16.267 "get_zone_info": false, 00:04:16.267 "zone_management": false, 00:04:16.267 "zone_append": false, 00:04:16.267 "compare": false, 00:04:16.267 "compare_and_write": false, 00:04:16.267 "abort": true, 00:04:16.267 "seek_hole": false, 00:04:16.267 "seek_data": false, 00:04:16.267 "copy": 
true, 00:04:16.267 "nvme_iov_md": false 00:04:16.267 }, 00:04:16.267 "memory_domains": [ 00:04:16.267 { 00:04:16.267 "dma_device_id": "system", 00:04:16.267 "dma_device_type": 1 00:04:16.267 }, 00:04:16.267 { 00:04:16.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.267 "dma_device_type": 2 00:04:16.267 } 00:04:16.267 ], 00:04:16.267 "driver_specific": {} 00:04:16.267 } 00:04:16.267 ]' 00:04:16.267 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.527 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.527 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.527 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.527 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.527 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.527 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.527 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.527 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.527 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.527 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.527 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.527 23:38:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.527 00:04:16.527 real 0m0.165s 00:04:16.527 user 0m0.092s 00:04:16.527 sys 0m0.027s 00:04:16.527 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.527 23:38:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.527 ************************************ 00:04:16.527 END TEST rpc_plugins 00:04:16.527 ************************************ 00:04:16.527 23:38:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.527 23:38:27 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.527 23:38:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.527 23:38:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.527 ************************************ 00:04:16.527 START TEST rpc_trace_cmd_test 00:04:16.527 ************************************ 00:04:16.527 23:38:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:16.527 23:38:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.527 23:38:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.527 23:38:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.527 23:38:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.527 23:38:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.527 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.527 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56893", 00:04:16.527 "tpoint_group_mask": "0x8", 00:04:16.527 "iscsi_conn": { 00:04:16.527 "mask": "0x2", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "scsi": { 00:04:16.527 "mask": "0x4", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "bdev": { 00:04:16.527 "mask": "0x8", 00:04:16.527 "tpoint_mask": "0xffffffffffffffff" 00:04:16.527 }, 00:04:16.527 "nvmf_rdma": { 00:04:16.527 "mask": "0x10", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "nvmf_tcp": { 00:04:16.527 "mask": "0x20", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "ftl": { 00:04:16.527 "mask": "0x40", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "blobfs": { 00:04:16.527 "mask": "0x80", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "dsa": { 00:04:16.527 "mask": "0x200", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "thread": { 00:04:16.527 "mask": "0x400", 00:04:16.527 
"tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "nvme_pcie": { 00:04:16.527 "mask": "0x800", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "iaa": { 00:04:16.527 "mask": "0x1000", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "nvme_tcp": { 00:04:16.527 "mask": "0x2000", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "bdev_nvme": { 00:04:16.527 "mask": "0x4000", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "sock": { 00:04:16.527 "mask": "0x8000", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "blob": { 00:04:16.527 "mask": "0x10000", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "bdev_raid": { 00:04:16.527 "mask": "0x20000", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 }, 00:04:16.527 "scheduler": { 00:04:16.527 "mask": "0x40000", 00:04:16.527 "tpoint_mask": "0x0" 00:04:16.527 } 00:04:16.527 }' 00:04:16.527 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.527 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.527 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.786 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.787 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.787 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.787 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.787 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.787 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.787 23:38:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.787 00:04:16.787 real 0m0.215s 00:04:16.787 user 0m0.172s 00:04:16.787 sys 0m0.036s 00:04:16.787 23:38:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:16.787 23:38:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.787 ************************************ 00:04:16.787 END TEST rpc_trace_cmd_test 00:04:16.787 ************************************ 00:04:16.787 23:38:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.787 23:38:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.787 23:38:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.787 23:38:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.787 23:38:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.787 23:38:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.787 ************************************ 00:04:16.787 START TEST rpc_daemon_integrity 00:04:16.787 ************************************ 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.787 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.046 { 00:04:17.046 "name": "Malloc2", 00:04:17.046 "aliases": [ 00:04:17.046 "5245142d-2341-4fd1-a50a-2fd412405125" 00:04:17.046 ], 00:04:17.046 "product_name": "Malloc disk", 00:04:17.046 "block_size": 512, 00:04:17.046 "num_blocks": 16384, 00:04:17.046 "uuid": "5245142d-2341-4fd1-a50a-2fd412405125", 00:04:17.046 "assigned_rate_limits": { 00:04:17.046 "rw_ios_per_sec": 0, 00:04:17.046 "rw_mbytes_per_sec": 0, 00:04:17.046 "r_mbytes_per_sec": 0, 00:04:17.046 "w_mbytes_per_sec": 0 00:04:17.046 }, 00:04:17.046 "claimed": false, 00:04:17.046 "zoned": false, 00:04:17.046 "supported_io_types": { 00:04:17.046 "read": true, 00:04:17.046 "write": true, 00:04:17.046 "unmap": true, 00:04:17.046 "flush": true, 00:04:17.046 "reset": true, 00:04:17.046 "nvme_admin": false, 00:04:17.046 "nvme_io": false, 00:04:17.046 "nvme_io_md": false, 00:04:17.046 "write_zeroes": true, 00:04:17.046 "zcopy": true, 00:04:17.046 "get_zone_info": false, 00:04:17.046 "zone_management": false, 00:04:17.046 "zone_append": false, 00:04:17.046 "compare": false, 00:04:17.046 "compare_and_write": false, 00:04:17.046 "abort": true, 00:04:17.046 "seek_hole": false, 00:04:17.046 "seek_data": false, 00:04:17.046 "copy": true, 00:04:17.046 "nvme_iov_md": false 00:04:17.046 }, 00:04:17.046 "memory_domains": [ 00:04:17.046 { 00:04:17.046 "dma_device_id": "system", 00:04:17.046 "dma_device_type": 1 00:04:17.046 }, 00:04:17.046 { 00:04:17.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.046 "dma_device_type": 2 00:04:17.046 } 
00:04:17.046 ], 00:04:17.046 "driver_specific": {} 00:04:17.046 } 00:04:17.046 ]' 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.046 [2024-12-06 23:38:28.439034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.046 [2024-12-06 23:38:28.439110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.046 [2024-12-06 23:38:28.439134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:17.046 [2024-12-06 23:38:28.439146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.046 [2024-12-06 23:38:28.441448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.046 [2024-12-06 23:38:28.441492] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.046 Passthru0 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.046 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.047 { 00:04:17.047 "name": "Malloc2", 00:04:17.047 "aliases": [ 00:04:17.047 "5245142d-2341-4fd1-a50a-2fd412405125" 
00:04:17.047 ], 00:04:17.047 "product_name": "Malloc disk", 00:04:17.047 "block_size": 512, 00:04:17.047 "num_blocks": 16384, 00:04:17.047 "uuid": "5245142d-2341-4fd1-a50a-2fd412405125", 00:04:17.047 "assigned_rate_limits": { 00:04:17.047 "rw_ios_per_sec": 0, 00:04:17.047 "rw_mbytes_per_sec": 0, 00:04:17.047 "r_mbytes_per_sec": 0, 00:04:17.047 "w_mbytes_per_sec": 0 00:04:17.047 }, 00:04:17.047 "claimed": true, 00:04:17.047 "claim_type": "exclusive_write", 00:04:17.047 "zoned": false, 00:04:17.047 "supported_io_types": { 00:04:17.047 "read": true, 00:04:17.047 "write": true, 00:04:17.047 "unmap": true, 00:04:17.047 "flush": true, 00:04:17.047 "reset": true, 00:04:17.047 "nvme_admin": false, 00:04:17.047 "nvme_io": false, 00:04:17.047 "nvme_io_md": false, 00:04:17.047 "write_zeroes": true, 00:04:17.047 "zcopy": true, 00:04:17.047 "get_zone_info": false, 00:04:17.047 "zone_management": false, 00:04:17.047 "zone_append": false, 00:04:17.047 "compare": false, 00:04:17.047 "compare_and_write": false, 00:04:17.047 "abort": true, 00:04:17.047 "seek_hole": false, 00:04:17.047 "seek_data": false, 00:04:17.047 "copy": true, 00:04:17.047 "nvme_iov_md": false 00:04:17.047 }, 00:04:17.047 "memory_domains": [ 00:04:17.047 { 00:04:17.047 "dma_device_id": "system", 00:04:17.047 "dma_device_type": 1 00:04:17.047 }, 00:04:17.047 { 00:04:17.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.047 "dma_device_type": 2 00:04:17.047 } 00:04:17.047 ], 00:04:17.047 "driver_specific": {} 00:04:17.047 }, 00:04:17.047 { 00:04:17.047 "name": "Passthru0", 00:04:17.047 "aliases": [ 00:04:17.047 "44240d72-f191-5eda-86d7-994a7da6bcec" 00:04:17.047 ], 00:04:17.047 "product_name": "passthru", 00:04:17.047 "block_size": 512, 00:04:17.047 "num_blocks": 16384, 00:04:17.047 "uuid": "44240d72-f191-5eda-86d7-994a7da6bcec", 00:04:17.047 "assigned_rate_limits": { 00:04:17.047 "rw_ios_per_sec": 0, 00:04:17.047 "rw_mbytes_per_sec": 0, 00:04:17.047 "r_mbytes_per_sec": 0, 00:04:17.047 "w_mbytes_per_sec": 0 
00:04:17.047 }, 00:04:17.047 "claimed": false, 00:04:17.047 "zoned": false, 00:04:17.047 "supported_io_types": { 00:04:17.047 "read": true, 00:04:17.047 "write": true, 00:04:17.047 "unmap": true, 00:04:17.047 "flush": true, 00:04:17.047 "reset": true, 00:04:17.047 "nvme_admin": false, 00:04:17.047 "nvme_io": false, 00:04:17.047 "nvme_io_md": false, 00:04:17.047 "write_zeroes": true, 00:04:17.047 "zcopy": true, 00:04:17.047 "get_zone_info": false, 00:04:17.047 "zone_management": false, 00:04:17.047 "zone_append": false, 00:04:17.047 "compare": false, 00:04:17.047 "compare_and_write": false, 00:04:17.047 "abort": true, 00:04:17.047 "seek_hole": false, 00:04:17.047 "seek_data": false, 00:04:17.047 "copy": true, 00:04:17.047 "nvme_iov_md": false 00:04:17.047 }, 00:04:17.047 "memory_domains": [ 00:04:17.047 { 00:04:17.047 "dma_device_id": "system", 00:04:17.047 "dma_device_type": 1 00:04:17.047 }, 00:04:17.047 { 00:04:17.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.047 "dma_device_type": 2 00:04:17.047 } 00:04:17.047 ], 00:04:17.047 "driver_specific": { 00:04:17.047 "passthru": { 00:04:17.047 "name": "Passthru0", 00:04:17.047 "base_bdev_name": "Malloc2" 00:04:17.047 } 00:04:17.047 } 00:04:17.047 } 00:04:17.047 ]' 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.047 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.306 23:38:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.306 00:04:17.306 real 0m0.335s 00:04:17.306 user 0m0.182s 00:04:17.306 sys 0m0.054s 00:04:17.306 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.306 23:38:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.306 ************************************ 00:04:17.306 END TEST rpc_daemon_integrity 00:04:17.306 ************************************ 00:04:17.306 23:38:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.306 23:38:28 rpc -- rpc/rpc.sh@84 -- # killprocess 56893 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 56893 ']' 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@958 -- # kill -0 56893 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@959 -- # uname 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56893 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.306 
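[Editor's annotation, not part of the original log] The rpc_daemon_integrity run above follows the rpc_integrity pattern: create a malloc bdev, wrap it with a passthru bdev, assert `bdev_get_bdevs` reports exactly two bdevs, then delete both and assert the list is empty. A condensed sketch of just the jq length checks, with canned JSON standing in for live `rpc_cmd bdev_get_bdevs` output (jq assumed installed):

```shell
# After bdev_malloc_create + bdev_passthru_create: the malloc base
# and the passthru wrapper should both be reported (canned stand-in).
bdevs='[{"name":"Malloc2"},{"name":"Passthru0"}]'
[ "$(echo "$bdevs" | jq length)" -eq 2 ]

# After bdev_passthru_delete + bdev_malloc_delete: nothing remains.
bdevs='[]'
[ "$(echo "$bdevs" | jq length)" -eq 0 ] && echo 'integrity checks passed'
```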
killing process with pid 56893 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56893' 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@973 -- # kill 56893 00:04:17.306 23:38:28 rpc -- common/autotest_common.sh@978 -- # wait 56893 00:04:19.900 00:04:19.900 real 0m5.272s 00:04:19.900 user 0m5.802s 00:04:19.900 sys 0m0.911s 00:04:19.900 23:38:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.900 23:38:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.900 ************************************ 00:04:19.900 END TEST rpc 00:04:19.900 ************************************ 00:04:19.900 23:38:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.900 23:38:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.900 23:38:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.900 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:04:19.900 ************************************ 00:04:19.900 START TEST skip_rpc 00:04:19.900 ************************************ 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.900 * Looking for test storage... 
00:04:19.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.900 23:38:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.900 --rc genhtml_branch_coverage=1 00:04:19.900 --rc genhtml_function_coverage=1 00:04:19.900 --rc genhtml_legend=1 00:04:19.900 --rc geninfo_all_blocks=1 00:04:19.900 --rc geninfo_unexecuted_blocks=1 00:04:19.900 00:04:19.900 ' 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.900 --rc genhtml_branch_coverage=1 00:04:19.900 --rc genhtml_function_coverage=1 00:04:19.900 --rc genhtml_legend=1 00:04:19.900 --rc geninfo_all_blocks=1 00:04:19.900 --rc geninfo_unexecuted_blocks=1 00:04:19.900 00:04:19.900 ' 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:19.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.900 --rc genhtml_branch_coverage=1 00:04:19.900 --rc genhtml_function_coverage=1 00:04:19.900 --rc genhtml_legend=1 00:04:19.900 --rc geninfo_all_blocks=1 00:04:19.900 --rc geninfo_unexecuted_blocks=1 00:04:19.900 00:04:19.900 ' 00:04:19.900 23:38:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.900 --rc genhtml_branch_coverage=1 00:04:19.900 --rc genhtml_function_coverage=1 00:04:19.900 --rc genhtml_legend=1 00:04:19.900 --rc geninfo_all_blocks=1 00:04:19.900 --rc geninfo_unexecuted_blocks=1 00:04:19.900 00:04:19.900 ' 00:04:19.901 23:38:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.901 23:38:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:19.901 23:38:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.901 23:38:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.901 23:38:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.901 23:38:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.901 ************************************ 00:04:19.901 START TEST skip_rpc 00:04:19.901 ************************************ 00:04:19.901 23:38:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:19.901 23:38:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57128 00:04:19.901 23:38:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.901 23:38:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.901 23:38:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:20.160 [2024-12-06 23:38:31.485793] Starting SPDK v25.01-pre 
git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:04:20.160 [2024-12-06 23:38:31.485914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57128 ] 00:04:20.160 [2024-12-06 23:38:31.657677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.419 [2024-12-06 23:38:31.766304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57128 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57128 ']' 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57128 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57128 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57128' 00:04:25.762 killing process with pid 57128 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57128 00:04:25.762 23:38:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57128 00:04:27.673 00:04:27.673 real 0m7.492s 00:04:27.673 user 0m7.022s 00:04:27.673 sys 0m0.391s 00:04:27.673 23:38:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.673 23:38:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.673 ************************************ 00:04:27.673 END TEST skip_rpc 00:04:27.673 ************************************ 00:04:27.673 23:38:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:27.673 23:38:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.673 23:38:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.673 23:38:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.673 
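[Editor's annotation, not part of the original log] The skip_rpc test above starts spdk_tgt with `--no-rpc-server` and then uses the `NOT` helper: the test passes only because `rpc_cmd spdk_get_version` fails. A pure-shell sketch of that inversion pattern, with `rpc_cmd` stubbed out since no target is running here:

```shell
# Stand-in for rpc_cmd when no RPC server is listening: always fails.
rpc_cmd() { return 1; }

# NOT runs its arguments and succeeds only if the command failed,
# mirroring the expect-failure wrapper used in the log above.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test failure
    fi
    return 0        # command failed as expected -> test pass
}

NOT rpc_cmd spdk_get_version && echo 'rpc correctly unavailable'
```

The real helper in autotest_common.sh also tracks the exit status (`es`) for finer-grained checks; this sketch keeps only the success/failure inversion.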
************************************ 00:04:27.673 START TEST skip_rpc_with_json 00:04:27.673 ************************************ 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57232 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57232 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57232 ']' 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.673 23:38:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.673 [2024-12-06 23:38:39.046947] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:04:27.673 [2024-12-06 23:38:39.047104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57232 ] 00:04:27.673 [2024-12-06 23:38:39.222498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.932 [2024-12-06 23:38:39.342062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.870 [2024-12-06 23:38:40.245190] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.870 request: 00:04:28.870 { 00:04:28.870 "trtype": "tcp", 00:04:28.870 "method": "nvmf_get_transports", 00:04:28.870 "req_id": 1 00:04:28.870 } 00:04:28.870 Got JSON-RPC error response 00:04:28.870 response: 00:04:28.870 { 00:04:28.870 "code": -19, 00:04:28.870 "message": "No such device" 00:04:28.870 } 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.870 [2024-12-06 23:38:40.257282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
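[Editor's annotation, not part of the original log] The `nvmf_get_transports --trtype tcp` call above fails before any transport exists, and the log shows the JSON-RPC error response with its `code` and `message` fields. A small sketch of parsing that error shape, using the response values shown in the log as a canned literal (jq assumed installed):

```shell
# Error body as reported in the log for the failed nvmf_get_transports call.
resp='{"code": -19, "message": "No such device"}'

# -19 corresponds to "No such device"; both fields are parsed here.
[ "$(echo "$resp" | jq .code)" -eq -19 ]
[ "$(echo "$resp" | jq -r .message)" = 'No such device' ]
echo 'error response parsed'
```

After `nvmf_create_transport -t tcp` succeeds, the same query returns normally, which is what the subsequent `save_config` dump reflects.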
00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.870 23:38:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.870 { 00:04:28.870 "subsystems": [ 00:04:28.870 { 00:04:28.870 "subsystem": "fsdev", 00:04:28.870 "config": [ 00:04:28.870 { 00:04:28.870 "method": "fsdev_set_opts", 00:04:28.870 "params": { 00:04:28.870 "fsdev_io_pool_size": 65535, 00:04:28.870 "fsdev_io_cache_size": 256 00:04:28.870 } 00:04:28.870 } 00:04:28.870 ] 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "subsystem": "keyring", 00:04:28.870 "config": [] 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "subsystem": "iobuf", 00:04:28.870 "config": [ 00:04:28.870 { 00:04:28.870 "method": "iobuf_set_options", 00:04:28.870 "params": { 00:04:28.870 "small_pool_count": 8192, 00:04:28.870 "large_pool_count": 1024, 00:04:28.870 "small_bufsize": 8192, 00:04:28.870 "large_bufsize": 135168, 00:04:28.870 "enable_numa": false 00:04:28.870 } 00:04:28.870 } 00:04:28.870 ] 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "subsystem": "sock", 00:04:28.870 "config": [ 00:04:28.870 { 00:04:28.870 "method": "sock_set_default_impl", 00:04:28.870 "params": { 00:04:28.870 "impl_name": "posix" 00:04:28.870 } 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "method": "sock_impl_set_options", 00:04:28.870 "params": { 00:04:28.870 "impl_name": "ssl", 00:04:28.870 "recv_buf_size": 4096, 00:04:28.870 "send_buf_size": 4096, 00:04:28.870 "enable_recv_pipe": true, 00:04:28.870 "enable_quickack": false, 00:04:28.870 
"enable_placement_id": 0, 00:04:28.870 "enable_zerocopy_send_server": true, 00:04:28.870 "enable_zerocopy_send_client": false, 00:04:28.870 "zerocopy_threshold": 0, 00:04:28.870 "tls_version": 0, 00:04:28.870 "enable_ktls": false 00:04:28.870 } 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "method": "sock_impl_set_options", 00:04:28.870 "params": { 00:04:28.870 "impl_name": "posix", 00:04:28.870 "recv_buf_size": 2097152, 00:04:28.870 "send_buf_size": 2097152, 00:04:28.870 "enable_recv_pipe": true, 00:04:28.870 "enable_quickack": false, 00:04:28.870 "enable_placement_id": 0, 00:04:28.870 "enable_zerocopy_send_server": true, 00:04:28.870 "enable_zerocopy_send_client": false, 00:04:28.870 "zerocopy_threshold": 0, 00:04:28.870 "tls_version": 0, 00:04:28.870 "enable_ktls": false 00:04:28.870 } 00:04:28.870 } 00:04:28.870 ] 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "subsystem": "vmd", 00:04:28.870 "config": [] 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "subsystem": "accel", 00:04:28.870 "config": [ 00:04:28.870 { 00:04:28.870 "method": "accel_set_options", 00:04:28.870 "params": { 00:04:28.870 "small_cache_size": 128, 00:04:28.870 "large_cache_size": 16, 00:04:28.870 "task_count": 2048, 00:04:28.870 "sequence_count": 2048, 00:04:28.870 "buf_count": 2048 00:04:28.870 } 00:04:28.870 } 00:04:28.870 ] 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "subsystem": "bdev", 00:04:28.870 "config": [ 00:04:28.870 { 00:04:28.870 "method": "bdev_set_options", 00:04:28.870 "params": { 00:04:28.870 "bdev_io_pool_size": 65535, 00:04:28.870 "bdev_io_cache_size": 256, 00:04:28.870 "bdev_auto_examine": true, 00:04:28.870 "iobuf_small_cache_size": 128, 00:04:28.870 "iobuf_large_cache_size": 16 00:04:28.870 } 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "method": "bdev_raid_set_options", 00:04:28.870 "params": { 00:04:28.870 "process_window_size_kb": 1024, 00:04:28.870 "process_max_bandwidth_mb_sec": 0 00:04:28.870 } 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "method": "bdev_iscsi_set_options", 
00:04:28.870 "params": { 00:04:28.870 "timeout_sec": 30 00:04:28.870 } 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "method": "bdev_nvme_set_options", 00:04:28.870 "params": { 00:04:28.870 "action_on_timeout": "none", 00:04:28.870 "timeout_us": 0, 00:04:28.870 "timeout_admin_us": 0, 00:04:28.870 "keep_alive_timeout_ms": 10000, 00:04:28.870 "arbitration_burst": 0, 00:04:28.870 "low_priority_weight": 0, 00:04:28.870 "medium_priority_weight": 0, 00:04:28.870 "high_priority_weight": 0, 00:04:28.870 "nvme_adminq_poll_period_us": 10000, 00:04:28.870 "nvme_ioq_poll_period_us": 0, 00:04:28.870 "io_queue_requests": 0, 00:04:28.870 "delay_cmd_submit": true, 00:04:28.870 "transport_retry_count": 4, 00:04:28.870 "bdev_retry_count": 3, 00:04:28.870 "transport_ack_timeout": 0, 00:04:28.870 "ctrlr_loss_timeout_sec": 0, 00:04:28.870 "reconnect_delay_sec": 0, 00:04:28.870 "fast_io_fail_timeout_sec": 0, 00:04:28.870 "disable_auto_failback": false, 00:04:28.870 "generate_uuids": false, 00:04:28.870 "transport_tos": 0, 00:04:28.870 "nvme_error_stat": false, 00:04:28.870 "rdma_srq_size": 0, 00:04:28.870 "io_path_stat": false, 00:04:28.870 "allow_accel_sequence": false, 00:04:28.870 "rdma_max_cq_size": 0, 00:04:28.870 "rdma_cm_event_timeout_ms": 0, 00:04:28.870 "dhchap_digests": [ 00:04:28.870 "sha256", 00:04:28.870 "sha384", 00:04:28.870 "sha512" 00:04:28.870 ], 00:04:28.870 "dhchap_dhgroups": [ 00:04:28.870 "null", 00:04:28.870 "ffdhe2048", 00:04:28.870 "ffdhe3072", 00:04:28.870 "ffdhe4096", 00:04:28.870 "ffdhe6144", 00:04:28.870 "ffdhe8192" 00:04:28.870 ] 00:04:28.870 } 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "method": "bdev_nvme_set_hotplug", 00:04:28.870 "params": { 00:04:28.870 "period_us": 100000, 00:04:28.870 "enable": false 00:04:28.870 } 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "method": "bdev_wait_for_examine" 00:04:28.870 } 00:04:28.870 ] 00:04:28.870 }, 00:04:28.870 { 00:04:28.870 "subsystem": "scsi", 00:04:28.871 "config": null 00:04:28.871 }, 00:04:28.871 { 
00:04:28.871 "subsystem": "scheduler", 00:04:28.871 "config": [ 00:04:28.871 { 00:04:28.871 "method": "framework_set_scheduler", 00:04:28.871 "params": { 00:04:28.871 "name": "static" 00:04:28.871 } 00:04:28.871 } 00:04:28.871 ] 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "subsystem": "vhost_scsi", 00:04:28.871 "config": [] 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "subsystem": "vhost_blk", 00:04:28.871 "config": [] 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "subsystem": "ublk", 00:04:28.871 "config": [] 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "subsystem": "nbd", 00:04:28.871 "config": [] 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "subsystem": "nvmf", 00:04:28.871 "config": [ 00:04:28.871 { 00:04:28.871 "method": "nvmf_set_config", 00:04:28.871 "params": { 00:04:28.871 "discovery_filter": "match_any", 00:04:28.871 "admin_cmd_passthru": { 00:04:28.871 "identify_ctrlr": false 00:04:28.871 }, 00:04:28.871 "dhchap_digests": [ 00:04:28.871 "sha256", 00:04:28.871 "sha384", 00:04:28.871 "sha512" 00:04:28.871 ], 00:04:28.871 "dhchap_dhgroups": [ 00:04:28.871 "null", 00:04:28.871 "ffdhe2048", 00:04:28.871 "ffdhe3072", 00:04:28.871 "ffdhe4096", 00:04:28.871 "ffdhe6144", 00:04:28.871 "ffdhe8192" 00:04:28.871 ] 00:04:28.871 } 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "method": "nvmf_set_max_subsystems", 00:04:28.871 "params": { 00:04:28.871 "max_subsystems": 1024 00:04:28.871 } 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "method": "nvmf_set_crdt", 00:04:28.871 "params": { 00:04:28.871 "crdt1": 0, 00:04:28.871 "crdt2": 0, 00:04:28.871 "crdt3": 0 00:04:28.871 } 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "method": "nvmf_create_transport", 00:04:28.871 "params": { 00:04:28.871 "trtype": "TCP", 00:04:28.871 "max_queue_depth": 128, 00:04:28.871 "max_io_qpairs_per_ctrlr": 127, 00:04:28.871 "in_capsule_data_size": 4096, 00:04:28.871 "max_io_size": 131072, 00:04:28.871 "io_unit_size": 131072, 00:04:28.871 "max_aq_depth": 128, 00:04:28.871 "num_shared_buffers": 511, 
00:04:28.871 "buf_cache_size": 4294967295, 00:04:28.871 "dif_insert_or_strip": false, 00:04:28.871 "zcopy": false, 00:04:28.871 "c2h_success": true, 00:04:28.871 "sock_priority": 0, 00:04:28.871 "abort_timeout_sec": 1, 00:04:28.871 "ack_timeout": 0, 00:04:28.871 "data_wr_pool_size": 0 00:04:28.871 } 00:04:28.871 } 00:04:28.871 ] 00:04:28.871 }, 00:04:28.871 { 00:04:28.871 "subsystem": "iscsi", 00:04:28.871 "config": [ 00:04:28.871 { 00:04:28.871 "method": "iscsi_set_options", 00:04:28.871 "params": { 00:04:28.871 "node_base": "iqn.2016-06.io.spdk", 00:04:28.871 "max_sessions": 128, 00:04:28.871 "max_connections_per_session": 2, 00:04:28.871 "max_queue_depth": 64, 00:04:28.871 "default_time2wait": 2, 00:04:28.871 "default_time2retain": 20, 00:04:28.871 "first_burst_length": 8192, 00:04:28.871 "immediate_data": true, 00:04:28.871 "allow_duplicated_isid": false, 00:04:28.871 "error_recovery_level": 0, 00:04:28.871 "nop_timeout": 60, 00:04:28.871 "nop_in_interval": 30, 00:04:28.871 "disable_chap": false, 00:04:28.871 "require_chap": false, 00:04:28.871 "mutual_chap": false, 00:04:28.871 "chap_group": 0, 00:04:28.871 "max_large_datain_per_connection": 64, 00:04:28.871 "max_r2t_per_connection": 4, 00:04:28.871 "pdu_pool_size": 36864, 00:04:28.871 "immediate_data_pool_size": 16384, 00:04:28.871 "data_out_pool_size": 2048 00:04:28.871 } 00:04:28.871 } 00:04:28.871 ] 00:04:28.871 } 00:04:28.871 ] 00:04:28.871 } 00:04:28.871 23:38:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.871 23:38:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57232 00:04:28.871 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57232 ']' 00:04:28.871 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57232 00:04:29.130 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:29.130 23:38:40 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.130 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57232 00:04:29.130 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.130 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.130 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57232' 00:04:29.130 killing process with pid 57232 00:04:29.130 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57232 00:04:29.130 23:38:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57232 00:04:31.669 23:38:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57288 00:04:31.669 23:38:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.669 23:38:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57288 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57288 ']' 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57288 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57288 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:36.964 killing process with pid 57288 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57288' 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57288 00:04:36.964 23:38:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57288 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.871 00:04:38.871 real 0m11.375s 00:04:38.871 user 0m10.854s 00:04:38.871 sys 0m0.819s 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.871 ************************************ 00:04:38.871 END TEST skip_rpc_with_json 00:04:38.871 ************************************ 00:04:38.871 23:38:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:38.871 23:38:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.871 23:38:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.871 23:38:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.871 ************************************ 00:04:38.871 START TEST skip_rpc_with_delay 00:04:38.871 ************************************ 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:38.871 23:38:50 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.871 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.131 [2024-12-06 23:38:50.496078] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:39.131 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:39.131 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.131 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.131 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.131 00:04:39.131 real 0m0.169s 00:04:39.131 user 0m0.091s 00:04:39.131 sys 0m0.076s 00:04:39.131 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.131 23:38:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:39.131 ************************************ 00:04:39.131 END TEST skip_rpc_with_delay 00:04:39.131 ************************************ 00:04:39.131 23:38:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:39.131 23:38:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:39.131 23:38:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:39.131 23:38:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.131 23:38:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.131 23:38:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.131 ************************************ 00:04:39.131 START TEST exit_on_failed_rpc_init 00:04:39.131 ************************************ 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57416 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57416 00:04:39.131 23:38:50 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57416 ']' 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.131 23:38:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.390 [2024-12-06 23:38:50.721908] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:04:39.390 [2024-12-06 23:38:50.722042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57416 ] 00:04:39.390 [2024-12-06 23:38:50.891038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.648 [2024-12-06 23:38:51.017740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.581 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.582 23:38:51 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.582 23:38:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.582 [2024-12-06 23:38:51.981727] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:04:40.582 [2024-12-06 23:38:51.982313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57440 ] 00:04:40.843 [2024-12-06 23:38:52.162413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.843 [2024-12-06 23:38:52.276571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.843 [2024-12-06 23:38:52.276667] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:40.843 [2024-12-06 23:38:52.276682] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:40.843 [2024-12-06 23:38:52.276694] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57416 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57416 ']' 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57416 00:04:41.105 23:38:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57416 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.105 killing process with pid 57416 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57416' 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57416 00:04:41.105 23:38:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57416 00:04:43.641 00:04:43.641 real 0m4.374s 00:04:43.641 user 0m4.702s 00:04:43.641 sys 0m0.579s 00:04:43.641 23:38:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.641 23:38:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.641 ************************************ 00:04:43.641 END TEST exit_on_failed_rpc_init 00:04:43.641 ************************************ 00:04:43.641 23:38:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.641 00:04:43.641 real 0m23.898s 00:04:43.641 user 0m22.860s 00:04:43.641 sys 0m2.187s 00:04:43.641 23:38:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.641 23:38:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.641 ************************************ 00:04:43.641 END TEST skip_rpc 00:04:43.641 ************************************ 00:04:43.641 23:38:55 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:43.641 23:38:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.641 23:38:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.641 23:38:55 -- common/autotest_common.sh@10 -- # set +x 00:04:43.641 ************************************ 00:04:43.641 START TEST rpc_client 00:04:43.641 ************************************ 00:04:43.641 23:38:55 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:43.900 * Looking for test storage... 00:04:43.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:43.900 23:38:55 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.900 23:38:55 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.900 23:38:55 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.900 23:38:55 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.900 23:38:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:43.901 23:38:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.901 23:38:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.901 --rc genhtml_branch_coverage=1 00:04:43.901 --rc genhtml_function_coverage=1 00:04:43.901 --rc genhtml_legend=1 00:04:43.901 --rc geninfo_all_blocks=1 00:04:43.901 --rc geninfo_unexecuted_blocks=1 00:04:43.901 00:04:43.901 ' 00:04:43.901 23:38:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.901 --rc genhtml_branch_coverage=1 00:04:43.901 --rc genhtml_function_coverage=1 00:04:43.901 --rc 
genhtml_legend=1 00:04:43.901 --rc geninfo_all_blocks=1 00:04:43.901 --rc geninfo_unexecuted_blocks=1 00:04:43.901 00:04:43.901 ' 00:04:43.901 23:38:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.901 --rc genhtml_branch_coverage=1 00:04:43.901 --rc genhtml_function_coverage=1 00:04:43.901 --rc genhtml_legend=1 00:04:43.901 --rc geninfo_all_blocks=1 00:04:43.901 --rc geninfo_unexecuted_blocks=1 00:04:43.901 00:04:43.901 ' 00:04:43.901 23:38:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.901 --rc genhtml_branch_coverage=1 00:04:43.901 --rc genhtml_function_coverage=1 00:04:43.901 --rc genhtml_legend=1 00:04:43.901 --rc geninfo_all_blocks=1 00:04:43.901 --rc geninfo_unexecuted_blocks=1 00:04:43.901 00:04:43.901 ' 00:04:43.901 23:38:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:43.901 OK 00:04:43.901 23:38:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.901 00:04:43.901 real 0m0.303s 00:04:43.901 user 0m0.165s 00:04:43.901 sys 0m0.151s 00:04:43.901 23:38:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.901 23:38:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:43.901 ************************************ 00:04:43.901 END TEST rpc_client 00:04:43.901 ************************************ 00:04:44.160 23:38:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.160 23:38:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.160 23:38:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.160 23:38:55 -- common/autotest_common.sh@10 -- # set +x 00:04:44.160 ************************************ 00:04:44.160 START TEST json_config 
00:04:44.160 ************************************ 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.160 23:38:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.160 23:38:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.160 23:38:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.160 23:38:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.160 23:38:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.160 23:38:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.160 23:38:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.160 23:38:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.160 23:38:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.160 23:38:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.160 23:38:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.160 23:38:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:44.160 23:38:55 json_config -- scripts/common.sh@345 -- # : 1 00:04:44.160 23:38:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.160 23:38:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.160 23:38:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:44.160 23:38:55 json_config -- scripts/common.sh@353 -- # local d=1 00:04:44.160 23:38:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.160 23:38:55 json_config -- scripts/common.sh@355 -- # echo 1 00:04:44.160 23:38:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.160 23:38:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:44.160 23:38:55 json_config -- scripts/common.sh@353 -- # local d=2 00:04:44.160 23:38:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.160 23:38:55 json_config -- scripts/common.sh@355 -- # echo 2 00:04:44.160 23:38:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.160 23:38:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.160 23:38:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.160 23:38:55 json_config -- scripts/common.sh@368 -- # return 0 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.160 --rc genhtml_branch_coverage=1 00:04:44.160 --rc genhtml_function_coverage=1 00:04:44.160 --rc genhtml_legend=1 00:04:44.160 --rc geninfo_all_blocks=1 00:04:44.160 --rc geninfo_unexecuted_blocks=1 00:04:44.160 00:04:44.160 ' 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.160 --rc genhtml_branch_coverage=1 00:04:44.160 --rc genhtml_function_coverage=1 00:04:44.160 --rc genhtml_legend=1 00:04:44.160 --rc geninfo_all_blocks=1 00:04:44.160 --rc geninfo_unexecuted_blocks=1 00:04:44.160 00:04:44.160 ' 00:04:44.160 23:38:55 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.160 --rc genhtml_branch_coverage=1 00:04:44.160 --rc genhtml_function_coverage=1 00:04:44.160 --rc genhtml_legend=1 00:04:44.160 --rc geninfo_all_blocks=1 00:04:44.160 --rc geninfo_unexecuted_blocks=1 00:04:44.160 00:04:44.160 ' 00:04:44.160 23:38:55 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.160 --rc genhtml_branch_coverage=1 00:04:44.160 --rc genhtml_function_coverage=1 00:04:44.160 --rc genhtml_legend=1 00:04:44.160 --rc geninfo_all_blocks=1 00:04:44.160 --rc geninfo_unexecuted_blocks=1 00:04:44.160 00:04:44.160 ' 00:04:44.160 23:38:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ab26a3d-4419-430f-b16d-7ba8ab10a33a 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=9ab26a3d-4419-430f-b16d-7ba8ab10a33a 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.160 23:38:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.160 23:38:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.160 23:38:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.160 23:38:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.160 23:38:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.160 23:38:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.160 23:38:55 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.160 23:38:55 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.160 23:38:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.160 23:38:55 json_config -- nvmf/common.sh@51 -- # : 0 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.161 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.161 23:38:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.161 23:38:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:44.161 23:38:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.161 23:38:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.161 23:38:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.161 23:38:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.161 23:38:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:44.161 WARNING: No tests are enabled so not running JSON configuration tests 00:04:44.161 23:38:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:44.161 00:04:44.161 real 0m0.225s 00:04:44.161 user 0m0.133s 00:04:44.161 sys 0m0.095s 00:04:44.161 23:38:55 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.161 23:38:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.161 ************************************ 00:04:44.161 END TEST json_config 00:04:44.161 ************************************ 00:04:44.419 23:38:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.419 23:38:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.419 23:38:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.419 23:38:55 -- common/autotest_common.sh@10 -- # set +x 00:04:44.419 ************************************ 00:04:44.419 START TEST json_config_extra_key 00:04:44.419 ************************************ 00:04:44.419 23:38:55 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.420 23:38:55 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.420 23:38:55 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:04:44.420 23:38:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.420 23:38:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:44.420 23:38:55 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.420 23:38:55 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.420 --rc genhtml_branch_coverage=1 00:04:44.420 --rc genhtml_function_coverage=1 00:04:44.420 --rc genhtml_legend=1 00:04:44.420 --rc geninfo_all_blocks=1 00:04:44.420 --rc geninfo_unexecuted_blocks=1 00:04:44.420 00:04:44.420 ' 00:04:44.420 23:38:55 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.420 --rc genhtml_branch_coverage=1 00:04:44.420 --rc genhtml_function_coverage=1 00:04:44.420 --rc 
genhtml_legend=1 00:04:44.420 --rc geninfo_all_blocks=1 00:04:44.420 --rc geninfo_unexecuted_blocks=1 00:04:44.420 00:04:44.420 ' 00:04:44.420 23:38:55 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.420 --rc genhtml_branch_coverage=1 00:04:44.420 --rc genhtml_function_coverage=1 00:04:44.420 --rc genhtml_legend=1 00:04:44.420 --rc geninfo_all_blocks=1 00:04:44.420 --rc geninfo_unexecuted_blocks=1 00:04:44.420 00:04:44.420 ' 00:04:44.420 23:38:55 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.420 --rc genhtml_branch_coverage=1 00:04:44.420 --rc genhtml_function_coverage=1 00:04:44.420 --rc genhtml_legend=1 00:04:44.420 --rc geninfo_all_blocks=1 00:04:44.420 --rc geninfo_unexecuted_blocks=1 00:04:44.420 00:04:44.420 ' 00:04:44.420 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ab26a3d-4419-430f-b16d-7ba8ab10a33a 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9ab26a3d-4419-430f-b16d-7ba8ab10a33a 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.420 23:38:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.420 23:38:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.679 23:38:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.679 23:38:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.679 23:38:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.679 23:38:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.679 23:38:55 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.679 23:38:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.679 23:38:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:44.679 23:38:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.679 23:38:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:44.679 INFO: launching applications... 
00:04:44.679 23:38:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:44.679 23:38:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.680 Waiting for target to run... 00:04:44.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57650 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57650 /var/tmp/spdk_tgt.sock 00:04:44.680 23:38:55 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57650 ']' 00:04:44.680 23:38:55 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.680 23:38:55 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.680 23:38:55 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:44.680 23:38:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:44.680 23:38:55 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.680 23:38:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:44.680 [2024-12-06 23:38:56.104453] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:04:44.680 [2024-12-06 23:38:56.104640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57650 ] 00:04:45.249 [2024-12-06 23:38:56.501964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.249 [2024-12-06 23:38:56.604311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.818 00:04:45.818 INFO: shutting down applications... 00:04:45.818 23:38:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.818 23:38:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:45.818 23:38:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.818 23:38:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:45.818 23:38:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.818 23:38:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.818 23:38:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.818 23:38:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57650 ]] 00:04:45.818 23:38:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57650 00:04:45.818 23:38:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.819 23:38:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.819 23:38:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57650 00:04:45.819 23:38:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.388 23:38:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.388 23:38:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.388 23:38:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57650 00:04:46.388 23:38:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.957 23:38:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.957 23:38:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.957 23:38:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57650 00:04:46.957 23:38:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.558 23:38:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.558 23:38:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.558 23:38:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57650 00:04:47.558 23:38:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.817 23:38:59 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:47.817 23:38:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.817 23:38:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57650 00:04:47.817 23:38:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.386 23:38:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.386 23:38:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.386 23:38:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57650 00:04:48.386 23:38:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.955 23:39:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.955 23:39:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.955 23:39:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57650 00:04:48.955 SPDK target shutdown done 00:04:48.955 Success 00:04:48.955 23:39:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.955 23:39:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:48.955 23:39:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.955 23:39:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.955 23:39:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:48.955 00:04:48.955 real 0m4.607s 00:04:48.955 user 0m4.126s 00:04:48.955 sys 0m0.544s 00:04:48.955 23:39:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.955 23:39:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.955 ************************************ 00:04:48.955 END TEST json_config_extra_key 00:04:48.955 ************************************ 00:04:48.955 23:39:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.955 23:39:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.955 23:39:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.955 23:39:00 -- common/autotest_common.sh@10 -- # set +x 00:04:48.955 ************************************ 00:04:48.955 START TEST alias_rpc 00:04:48.955 ************************************ 00:04:48.955 23:39:00 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.215 * Looking for test storage... 00:04:49.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.215 23:39:00 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.215 23:39:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.215 --rc genhtml_branch_coverage=1 00:04:49.215 --rc genhtml_function_coverage=1 00:04:49.215 --rc genhtml_legend=1 00:04:49.215 --rc geninfo_all_blocks=1 00:04:49.215 --rc geninfo_unexecuted_blocks=1 00:04:49.215 00:04:49.215 ' 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.215 --rc genhtml_branch_coverage=1 00:04:49.215 --rc genhtml_function_coverage=1 00:04:49.215 --rc 
genhtml_legend=1 00:04:49.215 --rc geninfo_all_blocks=1 00:04:49.215 --rc geninfo_unexecuted_blocks=1 00:04:49.215 00:04:49.215 ' 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.215 --rc genhtml_branch_coverage=1 00:04:49.215 --rc genhtml_function_coverage=1 00:04:49.215 --rc genhtml_legend=1 00:04:49.215 --rc geninfo_all_blocks=1 00:04:49.215 --rc geninfo_unexecuted_blocks=1 00:04:49.215 00:04:49.215 ' 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.215 --rc genhtml_branch_coverage=1 00:04:49.215 --rc genhtml_function_coverage=1 00:04:49.215 --rc genhtml_legend=1 00:04:49.215 --rc geninfo_all_blocks=1 00:04:49.215 --rc geninfo_unexecuted_blocks=1 00:04:49.215 00:04:49.215 ' 00:04:49.215 23:39:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:49.215 23:39:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.215 23:39:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57761 00:04:49.215 23:39:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57761 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57761 ']' 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.215 23:39:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.215 [2024-12-06 23:39:00.753996] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:04:49.215 [2024-12-06 23:39:00.754122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57761 ] 00:04:49.476 [2024-12-06 23:39:00.927970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.735 [2024-12-06 23:39:01.040443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.673 23:39:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.673 23:39:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:50.673 23:39:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:50.673 23:39:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57761 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57761 ']' 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57761 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57761 00:04:50.673 killing process with pid 57761 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57761' 00:04:50.673 23:39:02 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57761 00:04:50.673 23:39:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 57761 00:04:53.210 ************************************ 00:04:53.210 END TEST alias_rpc 00:04:53.210 ************************************ 00:04:53.210 00:04:53.210 real 0m4.218s 00:04:53.210 user 0m4.200s 00:04:53.210 sys 0m0.584s 00:04:53.211 23:39:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.211 23:39:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.211 23:39:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:53.211 23:39:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:53.211 23:39:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.211 23:39:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.211 23:39:04 -- common/autotest_common.sh@10 -- # set +x 00:04:53.211 ************************************ 00:04:53.211 START TEST spdkcli_tcp 00:04:53.211 ************************************ 00:04:53.211 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:53.515 * Looking for test storage... 
00:04:53.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:53.515 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.515 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.515 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.515 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.515 23:39:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.515 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.515 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.515 --rc genhtml_branch_coverage=1 00:04:53.515 --rc genhtml_function_coverage=1 00:04:53.515 --rc genhtml_legend=1 00:04:53.515 --rc geninfo_all_blocks=1 00:04:53.515 --rc geninfo_unexecuted_blocks=1 00:04:53.515 00:04:53.515 ' 00:04:53.515 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.515 --rc genhtml_branch_coverage=1 00:04:53.515 --rc genhtml_function_coverage=1 00:04:53.515 --rc genhtml_legend=1 00:04:53.515 --rc geninfo_all_blocks=1 00:04:53.515 --rc geninfo_unexecuted_blocks=1 00:04:53.515 00:04:53.515 ' 00:04:53.515 23:39:04 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.515 --rc genhtml_branch_coverage=1 00:04:53.515 --rc genhtml_function_coverage=1 00:04:53.515 --rc genhtml_legend=1 00:04:53.515 --rc geninfo_all_blocks=1 00:04:53.516 --rc geninfo_unexecuted_blocks=1 00:04:53.516 00:04:53.516 ' 00:04:53.516 23:39:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.516 --rc genhtml_branch_coverage=1 00:04:53.516 --rc genhtml_function_coverage=1 00:04:53.516 --rc genhtml_legend=1 00:04:53.516 --rc geninfo_all_blocks=1 00:04:53.516 --rc geninfo_unexecuted_blocks=1 00:04:53.516 00:04:53.516 ' 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.516 23:39:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.516 23:39:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57868 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.516 23:39:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57868 00:04:53.516 23:39:04 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57868 ']' 00:04:53.516 23:39:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.516 23:39:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.516 23:39:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.516 23:39:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.516 23:39:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.516 [2024-12-06 23:39:05.050315] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:04:53.516 [2024-12-06 23:39:05.050544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57868 ] 00:04:53.774 [2024-12-06 23:39:05.229516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.033 [2024-12-06 23:39:05.351879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.033 [2024-12-06 23:39:05.351919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.985 23:39:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.985 23:39:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:54.985 23:39:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57891 00:04:54.985 23:39:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:54.985 23:39:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:55.245 [ 00:04:55.245 "bdev_malloc_delete", 
00:04:55.245 "bdev_malloc_create", 00:04:55.245 "bdev_null_resize", 00:04:55.245 "bdev_null_delete", 00:04:55.245 "bdev_null_create", 00:04:55.245 "bdev_nvme_cuse_unregister", 00:04:55.245 "bdev_nvme_cuse_register", 00:04:55.245 "bdev_opal_new_user", 00:04:55.245 "bdev_opal_set_lock_state", 00:04:55.245 "bdev_opal_delete", 00:04:55.245 "bdev_opal_get_info", 00:04:55.245 "bdev_opal_create", 00:04:55.245 "bdev_nvme_opal_revert", 00:04:55.245 "bdev_nvme_opal_init", 00:04:55.245 "bdev_nvme_send_cmd", 00:04:55.245 "bdev_nvme_set_keys", 00:04:55.245 "bdev_nvme_get_path_iostat", 00:04:55.245 "bdev_nvme_get_mdns_discovery_info", 00:04:55.245 "bdev_nvme_stop_mdns_discovery", 00:04:55.245 "bdev_nvme_start_mdns_discovery", 00:04:55.245 "bdev_nvme_set_multipath_policy", 00:04:55.245 "bdev_nvme_set_preferred_path", 00:04:55.245 "bdev_nvme_get_io_paths", 00:04:55.245 "bdev_nvme_remove_error_injection", 00:04:55.245 "bdev_nvme_add_error_injection", 00:04:55.245 "bdev_nvme_get_discovery_info", 00:04:55.245 "bdev_nvme_stop_discovery", 00:04:55.245 "bdev_nvme_start_discovery", 00:04:55.245 "bdev_nvme_get_controller_health_info", 00:04:55.245 "bdev_nvme_disable_controller", 00:04:55.245 "bdev_nvme_enable_controller", 00:04:55.245 "bdev_nvme_reset_controller", 00:04:55.245 "bdev_nvme_get_transport_statistics", 00:04:55.245 "bdev_nvme_apply_firmware", 00:04:55.245 "bdev_nvme_detach_controller", 00:04:55.245 "bdev_nvme_get_controllers", 00:04:55.245 "bdev_nvme_attach_controller", 00:04:55.245 "bdev_nvme_set_hotplug", 00:04:55.245 "bdev_nvme_set_options", 00:04:55.245 "bdev_passthru_delete", 00:04:55.245 "bdev_passthru_create", 00:04:55.245 "bdev_lvol_set_parent_bdev", 00:04:55.245 "bdev_lvol_set_parent", 00:04:55.245 "bdev_lvol_check_shallow_copy", 00:04:55.245 "bdev_lvol_start_shallow_copy", 00:04:55.245 "bdev_lvol_grow_lvstore", 00:04:55.245 "bdev_lvol_get_lvols", 00:04:55.245 "bdev_lvol_get_lvstores", 00:04:55.245 "bdev_lvol_delete", 00:04:55.245 "bdev_lvol_set_read_only", 
00:04:55.245 "bdev_lvol_resize", 00:04:55.245 "bdev_lvol_decouple_parent", 00:04:55.245 "bdev_lvol_inflate", 00:04:55.245 "bdev_lvol_rename", 00:04:55.245 "bdev_lvol_clone_bdev", 00:04:55.245 "bdev_lvol_clone", 00:04:55.245 "bdev_lvol_snapshot", 00:04:55.245 "bdev_lvol_create", 00:04:55.245 "bdev_lvol_delete_lvstore", 00:04:55.245 "bdev_lvol_rename_lvstore", 00:04:55.245 "bdev_lvol_create_lvstore", 00:04:55.245 "bdev_raid_set_options", 00:04:55.245 "bdev_raid_remove_base_bdev", 00:04:55.246 "bdev_raid_add_base_bdev", 00:04:55.246 "bdev_raid_delete", 00:04:55.246 "bdev_raid_create", 00:04:55.246 "bdev_raid_get_bdevs", 00:04:55.246 "bdev_error_inject_error", 00:04:55.246 "bdev_error_delete", 00:04:55.246 "bdev_error_create", 00:04:55.246 "bdev_split_delete", 00:04:55.246 "bdev_split_create", 00:04:55.246 "bdev_delay_delete", 00:04:55.246 "bdev_delay_create", 00:04:55.246 "bdev_delay_update_latency", 00:04:55.246 "bdev_zone_block_delete", 00:04:55.246 "bdev_zone_block_create", 00:04:55.246 "blobfs_create", 00:04:55.246 "blobfs_detect", 00:04:55.246 "blobfs_set_cache_size", 00:04:55.246 "bdev_aio_delete", 00:04:55.246 "bdev_aio_rescan", 00:04:55.246 "bdev_aio_create", 00:04:55.246 "bdev_ftl_set_property", 00:04:55.246 "bdev_ftl_get_properties", 00:04:55.246 "bdev_ftl_get_stats", 00:04:55.246 "bdev_ftl_unmap", 00:04:55.246 "bdev_ftl_unload", 00:04:55.246 "bdev_ftl_delete", 00:04:55.246 "bdev_ftl_load", 00:04:55.246 "bdev_ftl_create", 00:04:55.246 "bdev_virtio_attach_controller", 00:04:55.246 "bdev_virtio_scsi_get_devices", 00:04:55.246 "bdev_virtio_detach_controller", 00:04:55.246 "bdev_virtio_blk_set_hotplug", 00:04:55.246 "bdev_iscsi_delete", 00:04:55.246 "bdev_iscsi_create", 00:04:55.246 "bdev_iscsi_set_options", 00:04:55.246 "accel_error_inject_error", 00:04:55.246 "ioat_scan_accel_module", 00:04:55.246 "dsa_scan_accel_module", 00:04:55.246 "iaa_scan_accel_module", 00:04:55.246 "keyring_file_remove_key", 00:04:55.246 "keyring_file_add_key", 00:04:55.246 
"keyring_linux_set_options", 00:04:55.246 "fsdev_aio_delete", 00:04:55.246 "fsdev_aio_create", 00:04:55.246 "iscsi_get_histogram", 00:04:55.246 "iscsi_enable_histogram", 00:04:55.246 "iscsi_set_options", 00:04:55.246 "iscsi_get_auth_groups", 00:04:55.246 "iscsi_auth_group_remove_secret", 00:04:55.246 "iscsi_auth_group_add_secret", 00:04:55.246 "iscsi_delete_auth_group", 00:04:55.246 "iscsi_create_auth_group", 00:04:55.246 "iscsi_set_discovery_auth", 00:04:55.246 "iscsi_get_options", 00:04:55.246 "iscsi_target_node_request_logout", 00:04:55.246 "iscsi_target_node_set_redirect", 00:04:55.246 "iscsi_target_node_set_auth", 00:04:55.246 "iscsi_target_node_add_lun", 00:04:55.246 "iscsi_get_stats", 00:04:55.246 "iscsi_get_connections", 00:04:55.246 "iscsi_portal_group_set_auth", 00:04:55.246 "iscsi_start_portal_group", 00:04:55.246 "iscsi_delete_portal_group", 00:04:55.246 "iscsi_create_portal_group", 00:04:55.246 "iscsi_get_portal_groups", 00:04:55.246 "iscsi_delete_target_node", 00:04:55.246 "iscsi_target_node_remove_pg_ig_maps", 00:04:55.246 "iscsi_target_node_add_pg_ig_maps", 00:04:55.246 "iscsi_create_target_node", 00:04:55.246 "iscsi_get_target_nodes", 00:04:55.246 "iscsi_delete_initiator_group", 00:04:55.246 "iscsi_initiator_group_remove_initiators", 00:04:55.246 "iscsi_initiator_group_add_initiators", 00:04:55.246 "iscsi_create_initiator_group", 00:04:55.246 "iscsi_get_initiator_groups", 00:04:55.246 "nvmf_set_crdt", 00:04:55.246 "nvmf_set_config", 00:04:55.246 "nvmf_set_max_subsystems", 00:04:55.246 "nvmf_stop_mdns_prr", 00:04:55.246 "nvmf_publish_mdns_prr", 00:04:55.246 "nvmf_subsystem_get_listeners", 00:04:55.246 "nvmf_subsystem_get_qpairs", 00:04:55.246 "nvmf_subsystem_get_controllers", 00:04:55.246 "nvmf_get_stats", 00:04:55.246 "nvmf_get_transports", 00:04:55.246 "nvmf_create_transport", 00:04:55.246 "nvmf_get_targets", 00:04:55.246 "nvmf_delete_target", 00:04:55.246 "nvmf_create_target", 00:04:55.246 "nvmf_subsystem_allow_any_host", 00:04:55.246 
"nvmf_subsystem_set_keys", 00:04:55.246 "nvmf_subsystem_remove_host", 00:04:55.246 "nvmf_subsystem_add_host", 00:04:55.246 "nvmf_ns_remove_host", 00:04:55.246 "nvmf_ns_add_host", 00:04:55.246 "nvmf_subsystem_remove_ns", 00:04:55.246 "nvmf_subsystem_set_ns_ana_group", 00:04:55.246 "nvmf_subsystem_add_ns", 00:04:55.246 "nvmf_subsystem_listener_set_ana_state", 00:04:55.246 "nvmf_discovery_get_referrals", 00:04:55.246 "nvmf_discovery_remove_referral", 00:04:55.246 "nvmf_discovery_add_referral", 00:04:55.246 "nvmf_subsystem_remove_listener", 00:04:55.246 "nvmf_subsystem_add_listener", 00:04:55.246 "nvmf_delete_subsystem", 00:04:55.246 "nvmf_create_subsystem", 00:04:55.246 "nvmf_get_subsystems", 00:04:55.246 "env_dpdk_get_mem_stats", 00:04:55.246 "nbd_get_disks", 00:04:55.246 "nbd_stop_disk", 00:04:55.246 "nbd_start_disk", 00:04:55.246 "ublk_recover_disk", 00:04:55.246 "ublk_get_disks", 00:04:55.246 "ublk_stop_disk", 00:04:55.246 "ublk_start_disk", 00:04:55.246 "ublk_destroy_target", 00:04:55.246 "ublk_create_target", 00:04:55.246 "virtio_blk_create_transport", 00:04:55.246 "virtio_blk_get_transports", 00:04:55.246 "vhost_controller_set_coalescing", 00:04:55.246 "vhost_get_controllers", 00:04:55.246 "vhost_delete_controller", 00:04:55.246 "vhost_create_blk_controller", 00:04:55.246 "vhost_scsi_controller_remove_target", 00:04:55.246 "vhost_scsi_controller_add_target", 00:04:55.246 "vhost_start_scsi_controller", 00:04:55.246 "vhost_create_scsi_controller", 00:04:55.246 "thread_set_cpumask", 00:04:55.246 "scheduler_set_options", 00:04:55.246 "framework_get_governor", 00:04:55.246 "framework_get_scheduler", 00:04:55.246 "framework_set_scheduler", 00:04:55.246 "framework_get_reactors", 00:04:55.246 "thread_get_io_channels", 00:04:55.246 "thread_get_pollers", 00:04:55.246 "thread_get_stats", 00:04:55.246 "framework_monitor_context_switch", 00:04:55.246 "spdk_kill_instance", 00:04:55.246 "log_enable_timestamps", 00:04:55.246 "log_get_flags", 00:04:55.246 "log_clear_flag", 
00:04:55.246 "log_set_flag", 00:04:55.246 "log_get_level", 00:04:55.246 "log_set_level", 00:04:55.246 "log_get_print_level", 00:04:55.246 "log_set_print_level", 00:04:55.246 "framework_enable_cpumask_locks", 00:04:55.246 "framework_disable_cpumask_locks", 00:04:55.246 "framework_wait_init", 00:04:55.246 "framework_start_init", 00:04:55.246 "scsi_get_devices", 00:04:55.246 "bdev_get_histogram", 00:04:55.246 "bdev_enable_histogram", 00:04:55.246 "bdev_set_qos_limit", 00:04:55.246 "bdev_set_qd_sampling_period", 00:04:55.246 "bdev_get_bdevs", 00:04:55.246 "bdev_reset_iostat", 00:04:55.246 "bdev_get_iostat", 00:04:55.246 "bdev_examine", 00:04:55.246 "bdev_wait_for_examine", 00:04:55.246 "bdev_set_options", 00:04:55.246 "accel_get_stats", 00:04:55.246 "accel_set_options", 00:04:55.246 "accel_set_driver", 00:04:55.246 "accel_crypto_key_destroy", 00:04:55.246 "accel_crypto_keys_get", 00:04:55.246 "accel_crypto_key_create", 00:04:55.246 "accel_assign_opc", 00:04:55.246 "accel_get_module_info", 00:04:55.246 "accel_get_opc_assignments", 00:04:55.246 "vmd_rescan", 00:04:55.246 "vmd_remove_device", 00:04:55.246 "vmd_enable", 00:04:55.246 "sock_get_default_impl", 00:04:55.246 "sock_set_default_impl", 00:04:55.246 "sock_impl_set_options", 00:04:55.246 "sock_impl_get_options", 00:04:55.246 "iobuf_get_stats", 00:04:55.246 "iobuf_set_options", 00:04:55.246 "keyring_get_keys", 00:04:55.246 "framework_get_pci_devices", 00:04:55.246 "framework_get_config", 00:04:55.246 "framework_get_subsystems", 00:04:55.246 "fsdev_set_opts", 00:04:55.246 "fsdev_get_opts", 00:04:55.246 "trace_get_info", 00:04:55.246 "trace_get_tpoint_group_mask", 00:04:55.246 "trace_disable_tpoint_group", 00:04:55.246 "trace_enable_tpoint_group", 00:04:55.246 "trace_clear_tpoint_mask", 00:04:55.246 "trace_set_tpoint_mask", 00:04:55.246 "notify_get_notifications", 00:04:55.246 "notify_get_types", 00:04:55.246 "spdk_get_version", 00:04:55.246 "rpc_get_methods" 00:04:55.246 ] 00:04:55.246 23:39:06 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.246 23:39:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.246 23:39:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57868 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57868 ']' 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57868 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57868 00:04:55.246 killing process with pid 57868 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57868' 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57868 00:04:55.246 23:39:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57868 00:04:58.542 ************************************ 00:04:58.542 END TEST spdkcli_tcp 00:04:58.542 ************************************ 00:04:58.542 00:04:58.542 real 0m4.631s 00:04:58.542 user 0m8.197s 00:04:58.542 sys 0m0.753s 00:04:58.542 23:39:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.542 23:39:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.542 23:39:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.542 23:39:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.542 23:39:09 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.542 23:39:09 -- common/autotest_common.sh@10 -- # set +x 00:04:58.542 ************************************ 00:04:58.542 START TEST dpdk_mem_utility 00:04:58.542 ************************************ 00:04:58.542 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.542 * Looking for test storage... 00:04:58.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:58.542 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.542 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.542 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.542 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:58.542 
23:39:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.542 23:39:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:58.542 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.542 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.542 --rc genhtml_branch_coverage=1 00:04:58.543 --rc genhtml_function_coverage=1 00:04:58.543 --rc genhtml_legend=1 00:04:58.543 --rc geninfo_all_blocks=1 00:04:58.543 --rc geninfo_unexecuted_blocks=1 00:04:58.543 00:04:58.543 ' 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.543 --rc 
genhtml_branch_coverage=1 00:04:58.543 --rc genhtml_function_coverage=1 00:04:58.543 --rc genhtml_legend=1 00:04:58.543 --rc geninfo_all_blocks=1 00:04:58.543 --rc geninfo_unexecuted_blocks=1 00:04:58.543 00:04:58.543 ' 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.543 --rc genhtml_branch_coverage=1 00:04:58.543 --rc genhtml_function_coverage=1 00:04:58.543 --rc genhtml_legend=1 00:04:58.543 --rc geninfo_all_blocks=1 00:04:58.543 --rc geninfo_unexecuted_blocks=1 00:04:58.543 00:04:58.543 ' 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.543 --rc genhtml_branch_coverage=1 00:04:58.543 --rc genhtml_function_coverage=1 00:04:58.543 --rc genhtml_legend=1 00:04:58.543 --rc geninfo_all_blocks=1 00:04:58.543 --rc geninfo_unexecuted_blocks=1 00:04:58.543 00:04:58.543 ' 00:04:58.543 23:39:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.543 23:39:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57997 00:04:58.543 23:39:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.543 23:39:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57997 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57997 ']' 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:58.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.543 23:39:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.543 [2024-12-06 23:39:09.742767] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:04:58.543 [2024-12-06 23:39:09.742954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57997 ] 00:04:58.543 [2024-12-06 23:39:09.914482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.543 [2024-12-06 23:39:10.061550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.946 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.946 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:59.947 23:39:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:59.947 23:39:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:59.947 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.947 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.947 { 00:04:59.947 "filename": "/tmp/spdk_mem_dump.txt" 00:04:59.947 } 00:04:59.947 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.947 23:39:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.947 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:59.947 1 heaps totaling size 824.000000 MiB 00:04:59.947 size: 
824.000000 MiB heap id: 0 00:04:59.947 end heaps---------- 00:04:59.947 9 mempools totaling size 603.782043 MiB 00:04:59.947 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:59.947 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:59.947 size: 100.555481 MiB name: bdev_io_57997 00:04:59.947 size: 50.003479 MiB name: msgpool_57997 00:04:59.947 size: 36.509338 MiB name: fsdev_io_57997 00:04:59.947 size: 21.763794 MiB name: PDU_Pool 00:04:59.947 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:59.947 size: 4.133484 MiB name: evtpool_57997 00:04:59.947 size: 0.026123 MiB name: Session_Pool 00:04:59.947 end mempools------- 00:04:59.947 6 memzones totaling size 4.142822 MiB 00:04:59.947 size: 1.000366 MiB name: RG_ring_0_57997 00:04:59.947 size: 1.000366 MiB name: RG_ring_1_57997 00:04:59.947 size: 1.000366 MiB name: RG_ring_4_57997 00:04:59.947 size: 1.000366 MiB name: RG_ring_5_57997 00:04:59.947 size: 0.125366 MiB name: RG_ring_2_57997 00:04:59.947 size: 0.015991 MiB name: RG_ring_3_57997 00:04:59.947 end memzones------- 00:04:59.947 23:39:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:59.947 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:04:59.947 list of free elements. 
size: 16.781372 MiB 00:04:59.947 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:59.947 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:59.947 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:59.947 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:59.947 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:59.947 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:59.947 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:59.947 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:59.947 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:59.947 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:59.947 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:59.947 element at address: 0x20001b400000 with size: 0.562683 MiB 00:04:59.947 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:59.947 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:59.947 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:59.947 element at address: 0x200012c00000 with size: 0.433472 MiB 00:04:59.947 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:59.947 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:59.947 list of standard malloc elements. 
size: 199.287720 MiB 00:04:59.947 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:59.947 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:59.947 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:59.947 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:59.947 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:59.947 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:59.947 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:59.947 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:59.947 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:59.947 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:59.947 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:59.947 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:59.947 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:59.947 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:59.947 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:59.947 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:59.947 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:59.948 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:59.948 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4911c0 with size: 0.000244 
MiB 00:04:59.948 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b492dc0 
with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:59.948 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:59.948 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886bc80 with size: 0.000244 MiB 
00:04:59.948 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:59.948 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d880 with 
size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:59.949 element at address: 
0x20002886f480 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:59.949 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:59.949 list of memzone associated elements. size: 607.930908 MiB 00:04:59.949 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:59.949 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:59.949 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:59.949 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:59.949 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:59.949 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57997_0 00:04:59.949 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:59.949 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57997_0 00:04:59.949 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:59.949 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57997_0 00:04:59.949 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:59.949 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:59.949 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:59.949 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:59.949 element at address: 0x2000004ffec0 with size: 
3.000305 MiB 00:04:59.949 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57997_0 00:04:59.949 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:59.949 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57997 00:04:59.949 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:59.949 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57997 00:04:59.949 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:59.949 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:59.949 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:59.949 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:59.949 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:59.949 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:59.949 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:59.949 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:59.949 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:59.949 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57997 00:04:59.949 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:59.949 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57997 00:04:59.949 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:59.949 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57997 00:04:59.949 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:59.949 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57997 00:04:59.949 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:59.949 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57997 00:04:59.949 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:59.949 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57997 00:04:59.949 element at address: 0x20001967dac0 with size: 
0.500549 MiB 00:04:59.949 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:59.949 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:59.949 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:59.949 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:59.949 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:59.949 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:59.949 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57997 00:04:59.949 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:59.949 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57997 00:04:59.949 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:59.949 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:59.949 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:59.949 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:59.949 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:59.949 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57997 00:04:59.949 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:59.949 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:59.949 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:59.949 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57997 00:04:59.949 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:59.949 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57997 00:04:59.949 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:59.949 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57997 00:04:59.949 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:59.949 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:59.949 23:39:11 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:59.949 23:39:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57997 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57997 ']' 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57997 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57997 00:04:59.949 killing process with pid 57997 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57997' 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57997 00:04:59.949 23:39:11 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57997 00:05:02.488 ************************************ 00:05:02.488 END TEST dpdk_mem_utility 00:05:02.488 ************************************ 00:05:02.488 00:05:02.488 real 0m4.556s 00:05:02.488 user 0m4.278s 00:05:02.488 sys 0m0.731s 00:05:02.488 23:39:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.488 23:39:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.488 23:39:14 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:02.488 23:39:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.488 23:39:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.488 23:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:02.488 ************************************ 
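The heap dump printed by the dpdk_mem_utility test above lists every malloc element as `element at address 0x… with size: N MiB`. As a quick sanity check, those per-element sizes can be totalled with a short awk filter; this is only a sketch of one way to post-process a captured dump (feeding the text on stdin), not part of the test itself:

```shell
# Sum the per-element sizes ("with size: N MiB") from a dpdk_mem_utility
# heap dump fed on stdin; prints the total in MiB. Robust to the
# timestamps interleaved in the log, since it scans fields positionally.
sum_element_mib() {
    awk '{ for (i = 1; i < NF; i++)
               if ($i == "size:" && $(i + 2) == "MiB") total += $(i + 1) }
         END { printf "%.6f\n", total }'
}

# Example with two entries in the same format as the dump above:
printf '%s\n' \
    'element at address 0x200000400000 with size: 0.992004 MiB' \
    'element at address 0x200019200000 with size: 0.959656 MiB' |
    sum_element_mib
# → 1.951660
```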
00:05:02.488 START TEST event 00:05:02.488 ************************************ 00:05:02.488 23:39:14 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:02.746 * Looking for test storage... 00:05:02.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.746 23:39:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.746 23:39:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.746 23:39:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.746 23:39:14 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.746 23:39:14 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.746 23:39:14 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.746 23:39:14 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.746 23:39:14 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.746 23:39:14 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.746 23:39:14 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.746 23:39:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.746 23:39:14 event -- scripts/common.sh@344 -- # case "$op" in 00:05:02.746 23:39:14 event -- scripts/common.sh@345 -- # : 1 00:05:02.746 23:39:14 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.746 23:39:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.746 23:39:14 event -- scripts/common.sh@365 -- # decimal 1 00:05:02.746 23:39:14 event -- scripts/common.sh@353 -- # local d=1 00:05:02.746 23:39:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.746 23:39:14 event -- scripts/common.sh@355 -- # echo 1 00:05:02.746 23:39:14 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.746 23:39:14 event -- scripts/common.sh@366 -- # decimal 2 00:05:02.746 23:39:14 event -- scripts/common.sh@353 -- # local d=2 00:05:02.746 23:39:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.746 23:39:14 event -- scripts/common.sh@355 -- # echo 2 00:05:02.746 23:39:14 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.746 23:39:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.746 23:39:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.746 23:39:14 event -- scripts/common.sh@368 -- # return 0 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.746 --rc genhtml_branch_coverage=1 00:05:02.746 --rc genhtml_function_coverage=1 00:05:02.746 --rc genhtml_legend=1 00:05:02.746 --rc geninfo_all_blocks=1 00:05:02.746 --rc geninfo_unexecuted_blocks=1 00:05:02.746 00:05:02.746 ' 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.746 --rc genhtml_branch_coverage=1 00:05:02.746 --rc genhtml_function_coverage=1 00:05:02.746 --rc genhtml_legend=1 00:05:02.746 --rc geninfo_all_blocks=1 00:05:02.746 --rc geninfo_unexecuted_blocks=1 00:05:02.746 00:05:02.746 ' 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.746 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:02.746 --rc genhtml_branch_coverage=1 00:05:02.746 --rc genhtml_function_coverage=1 00:05:02.746 --rc genhtml_legend=1 00:05:02.746 --rc geninfo_all_blocks=1 00:05:02.746 --rc geninfo_unexecuted_blocks=1 00:05:02.746 00:05:02.746 ' 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.746 --rc genhtml_branch_coverage=1 00:05:02.746 --rc genhtml_function_coverage=1 00:05:02.746 --rc genhtml_legend=1 00:05:02.746 --rc geninfo_all_blocks=1 00:05:02.746 --rc geninfo_unexecuted_blocks=1 00:05:02.746 00:05:02.746 ' 00:05:02.746 23:39:14 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:02.746 23:39:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.746 23:39:14 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:02.746 23:39:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.746 23:39:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.746 ************************************ 00:05:02.746 START TEST event_perf 00:05:02.746 ************************************ 00:05:02.746 23:39:14 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.004 Running I/O for 1 seconds...[2024-12-06 23:39:14.328110] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:05:03.004 [2024-12-06 23:39:14.328351] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:05:03.004 [2024-12-06 23:39:14.522980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.263 [2024-12-06 23:39:14.676622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.264 [2024-12-06 23:39:14.676826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.264 [2024-12-06 23:39:14.676862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.264 Running I/O for 1 seconds...[2024-12-06 23:39:14.676883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.646 00:05:04.646 lcore 0: 92606 00:05:04.646 lcore 1: 92609 00:05:04.646 lcore 2: 92607 00:05:04.646 lcore 3: 92605 00:05:04.646 done. 
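The xtrace near the top of this chunk comes from `cmp_versions` in `scripts/common.sh`, which splits dotted version strings on `.`, `-`, and `:` (the `IFS=.-: read -ra` trick) and compares them field by field. A minimal standalone sketch of the same idea; the helper below is illustrative and simplified (it defaults missing fields to 0 instead of validating each field with `decimal` as the real script does):

```shell
#!/usr/bin/env bash
# Field-wise comparison of dotted version strings, mirroring the
# scripts/common.sh trace above. Prints "lt", "gt", or "eq".
cmp_versions() {
    local -a ver1 ver2
    local v ver1_l ver2_l a b
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-', ':'
    IFS=.-: read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    # Walk the longer of the two field lists; missing fields count as 0.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        if (( a > b )); then echo gt; return; fi
        if (( a < b )); then echo lt; return; fi
    done
    echo eq
}

cmp_versions 1.15 2        # lt (this is the "lt 1.15 2" check in the log)
cmp_versions 2.39.2 2.39   # gt
```

This is why the trace above ends with `return 0` for `lt 1.15 2`: the first fields already decide the comparison, so the installed lcov 1.15 is treated as older than 2.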
00:05:04.646 00:05:04.646 real 0m1.660s 00:05:04.646 user 0m4.401s 00:05:04.646 sys 0m0.134s 00:05:04.646 23:39:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.646 23:39:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.646 ************************************ 00:05:04.646 END TEST event_perf 00:05:04.646 ************************************ 00:05:04.646 23:39:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:04.646 23:39:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:04.647 23:39:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.647 23:39:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.647 ************************************ 00:05:04.647 START TEST event_reactor 00:05:04.647 ************************************ 00:05:04.647 23:39:16 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:04.647 [2024-12-06 23:39:16.050036] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:05:04.647 [2024-12-06 23:39:16.050225] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:05:04.904 [2024-12-06 23:39:16.226877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.904 [2024-12-06 23:39:16.368418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.294 test_start 00:05:06.294 oneshot 00:05:06.294 tick 100 00:05:06.294 tick 100 00:05:06.294 tick 250 00:05:06.294 tick 100 00:05:06.294 tick 100 00:05:06.294 tick 100 00:05:06.294 tick 250 00:05:06.294 tick 500 00:05:06.294 tick 100 00:05:06.294 tick 100 00:05:06.294 tick 250 00:05:06.294 tick 100 00:05:06.294 tick 100 00:05:06.294 test_end 00:05:06.294 00:05:06.294 real 0m1.613s 00:05:06.294 user 0m1.396s 00:05:06.294 sys 0m0.108s 00:05:06.294 ************************************ 00:05:06.294 END TEST event_reactor 00:05:06.294 ************************************ 00:05:06.294 23:39:17 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.294 23:39:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.294 23:39:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.294 23:39:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:06.294 23:39:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.294 23:39:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.294 ************************************ 00:05:06.294 START TEST event_reactor_perf 00:05:06.294 ************************************ 00:05:06.294 23:39:17 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.294 [2024-12-06 
23:39:17.728391] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:05:06.294 [2024-12-06 23:39:17.728538] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58192 ] 00:05:06.554 [2024-12-06 23:39:17.900688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.554 [2024-12-06 23:39:18.043457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.936 test_start 00:05:07.936 test_end 00:05:07.936 Performance: 373025 events per second 00:05:07.936 ************************************ 00:05:07.936 END TEST event_reactor_perf 00:05:07.936 ************************************ 00:05:07.936 00:05:07.936 real 0m1.610s 00:05:07.936 user 0m1.395s 00:05:07.936 sys 0m0.106s 00:05:07.936 23:39:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.936 23:39:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.936 23:39:19 event -- event/event.sh@49 -- # uname -s 00:05:07.936 23:39:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:07.936 23:39:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:07.936 23:39:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.936 23:39:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.936 23:39:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.936 ************************************ 00:05:07.936 START TEST event_scheduler 00:05:07.936 ************************************ 00:05:07.936 23:39:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:07.936 * Looking for test storage... 
00:05:07.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:07.936 23:39:19 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.936 23:39:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.936 23:39:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.194 23:39:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.194 23:39:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.195 23:39:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.195 --rc genhtml_branch_coverage=1 00:05:08.195 --rc genhtml_function_coverage=1 00:05:08.195 --rc genhtml_legend=1 00:05:08.195 --rc geninfo_all_blocks=1 00:05:08.195 --rc geninfo_unexecuted_blocks=1 00:05:08.195 00:05:08.195 ' 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.195 --rc genhtml_branch_coverage=1 00:05:08.195 --rc genhtml_function_coverage=1 00:05:08.195 --rc 
genhtml_legend=1 00:05:08.195 --rc geninfo_all_blocks=1 00:05:08.195 --rc geninfo_unexecuted_blocks=1 00:05:08.195 00:05:08.195 ' 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.195 --rc genhtml_branch_coverage=1 00:05:08.195 --rc genhtml_function_coverage=1 00:05:08.195 --rc genhtml_legend=1 00:05:08.195 --rc geninfo_all_blocks=1 00:05:08.195 --rc geninfo_unexecuted_blocks=1 00:05:08.195 00:05:08.195 ' 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.195 --rc genhtml_branch_coverage=1 00:05:08.195 --rc genhtml_function_coverage=1 00:05:08.195 --rc genhtml_legend=1 00:05:08.195 --rc geninfo_all_blocks=1 00:05:08.195 --rc geninfo_unexecuted_blocks=1 00:05:08.195 00:05:08.195 ' 00:05:08.195 23:39:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.195 23:39:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58262 00:05:08.195 23:39:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.195 23:39:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.195 23:39:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58262 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58262 ']' 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:08.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.195 23:39:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.195 [2024-12-06 23:39:19.648898] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:05:08.195 [2024-12-06 23:39:19.649085] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58262 ] 00:05:08.454 [2024-12-06 23:39:19.819690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.454 [2024-12-06 23:39:19.937140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.454 [2024-12-06 23:39:19.937512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.454 [2024-12-06 23:39:19.937334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.454 [2024-12-06 23:39:19.937542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.023 23:39:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.023 23:39:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:09.023 23:39:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.023 23:39:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.023 23:39:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.023 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.023 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.023 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.023 POWER: Cannot set governor of lcore 0 to performance 00:05:09.023 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.023 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.023 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.023 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.023 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:09.023 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:09.023 POWER: Unable to set Power Management Environment for lcore 0 00:05:09.023 [2024-12-06 23:39:20.514643] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:09.023 [2024-12-06 23:39:20.514708] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:09.023 [2024-12-06 23:39:20.514734] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.023 [2024-12-06 23:39:20.514774] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.023 [2024-12-06 23:39:20.514806] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.023 [2024-12-06 23:39:20.514829] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.023 23:39:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.023 23:39:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.023 23:39:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.023 23:39:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.282 [2024-12-06 23:39:20.836195] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
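Every `run_test NAME cmd...` call in this log wraps the named test in `START TEST` / `END TEST` banners and a `time` summary (the `real`/`user`/`sys` lines after each test). A rough standalone sketch of that wrapper; the real helper lives in `autotest_common.sh` and additionally manages xtrace and argument checks, which are omitted here:

```shell
#!/usr/bin/env bash
# Simplified sketch of the run_test wrapper seen throughout this log:
# banner, timed command, closing banner, propagate the exit status.
run_test() {
    local name=$1
    shift
    printf '%s\n' "************************************"
    printf 'START TEST %s\n' "$name"
    printf '%s\n' "************************************"
    time "$@"
    local rc=$?            # capture the wrapped command's status
    printf '%s\n' "************************************"
    printf 'END TEST %s\n' "$name"
    printf '%s\n' "************************************"
    return $rc
}

run_test demo_sleep sleep 0.1
```

Trapping the status before printing the closing banner is what lets the harness both show the `END TEST` marker and still fail the pipeline when the wrapped command fails.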
00:05:09.282 23:39:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.282 23:39:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:09.282 23:39:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.282 23:39:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.282 23:39:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 ************************************ 00:05:09.542 START TEST scheduler_create_thread 00:05:09.542 ************************************ 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 2 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 3 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 4 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 5 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 6 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.542 7 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 8 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 9 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 10 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:09.542 23:39:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.542 23:39:20 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.479 ************************************ 00:05:10.479 END TEST scheduler_create_thread 00:05:10.479 ************************************ 00:05:10.479 23:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.479 00:05:10.479 real 0m1.175s 00:05:10.480 user 0m0.015s 00:05:10.480 sys 0m0.004s 00:05:10.480 23:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.480 23:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.739 23:39:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.739 23:39:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58262 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58262 ']' 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58262 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58262 00:05:10.739 killing process with pid 58262 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58262' 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58262 00:05:10.739 23:39:22 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58262 00:05:10.998 [2024-12-06 23:39:22.500359] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:12.373 ************************************ 00:05:12.373 END TEST event_scheduler 00:05:12.373 ************************************ 00:05:12.373 00:05:12.373 real 0m4.298s 00:05:12.373 user 0m7.366s 00:05:12.373 sys 0m0.500s 00:05:12.373 23:39:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.373 23:39:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.373 23:39:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:12.373 23:39:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:12.373 23:39:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.374 23:39:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.374 23:39:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.374 ************************************ 00:05:12.374 START TEST app_repeat 00:05:12.374 ************************************ 00:05:12.374 23:39:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58357 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:12.374 
23:39:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58357' 00:05:12.374 Process app_repeat pid: 58357 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.374 spdk_app_start Round 0 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:12.374 23:39:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58357 /var/tmp/spdk-nbd.sock 00:05:12.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.374 23:39:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58357 ']' 00:05:12.374 23:39:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.374 23:39:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.374 23:39:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.374 23:39:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.374 23:39:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.374 [2024-12-06 23:39:23.795810] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:05:12.374 [2024-12-06 23:39:23.795911] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58357 ] 00:05:12.633 [2024-12-06 23:39:23.954227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.633 [2024-12-06 23:39:24.097885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.633 [2024-12-06 23:39:24.097935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.202 23:39:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.202 23:39:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.202 23:39:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.462 Malloc0 00:05:13.462 23:39:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.729 Malloc1 00:05:13.991 23:39:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.991 23:39:25 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:13.991 /dev/nbd0
00:05:13.991 23:39:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:14.249 23:39:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:14.249 1+0 records in
00:05:14.249 1+0 records out
00:05:14.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299443 s, 13.7 MB/s
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:14.249 23:39:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:14.249 23:39:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:14.249 23:39:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:14.249 23:39:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:14.249 /dev/nbd1
00:05:14.507 23:39:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:14.507 23:39:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:14.507 1+0 records in
00:05:14.507 1+0 records out
00:05:14.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056817 s, 7.2 MB/s
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:14.507 23:39:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:14.508 23:39:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:14.508 23:39:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:14.508 23:39:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:14.508 23:39:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:14.508 23:39:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:14.508 23:39:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.508 23:39:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:14.508 23:39:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:14.508 {
00:05:14.508 "nbd_device": "/dev/nbd0",
00:05:14.508 "bdev_name": "Malloc0"
00:05:14.508 },
00:05:14.508 {
00:05:14.508 "nbd_device": "/dev/nbd1",
00:05:14.508 "bdev_name": "Malloc1"
00:05:14.508 }
00:05:14.508 ]'
00:05:14.508 23:39:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:14.508 {
00:05:14.508 "nbd_device": "/dev/nbd0",
00:05:14.508 "bdev_name": "Malloc0"
00:05:14.508 },
00:05:14.508 {
00:05:14.508 "nbd_device": "/dev/nbd1",
00:05:14.508 "bdev_name": "Malloc1"
00:05:14.508 }
00:05:14.508 ]'
00:05:14.508 23:39:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
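The trace above shows the `waitfornbd` pattern each time a disk is attached: poll `/proc/partitions` for the nbd name up to 20 times, then prove the device actually answers I/O with a single 4 KiB direct read. The retry shape can be sketched as below; this is a minimal self-contained sketch (a plain file stands in for the `/proc/partitions` lookup so it runs anywhere), not the test suite's actual helper.

```shell
# Hedged sketch of a waitfornbd-style readiness poll: retry a cheap existence
# check a bounded number of times before giving up. The real helper greps
# /proc/partitions for the nbd name; here a file path is the stand-in.
wait_for_resource() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if [ -e "$name" ]; then   # stand-in for: grep -q -w "$name" /proc/partitions
            return 0
        fi
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/nbd0" ) &   # the resource appears asynchronously
wait_for_resource "$tmp/nbd0" && echo "ready"
wait
rm -rf "$tmp"
```

Bounding the loop matters in CI: a device that never appears fails the test after ~2 seconds instead of hanging the job.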
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:14.766 /dev/nbd1'
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:14.766 /dev/nbd1'
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:14.766 256+0 records in
00:05:14.766 256+0 records out
00:05:14.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148585 s, 70.6 MB/s
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:14.766 256+0 records in
00:05:14.766 256+0 records out
00:05:14.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023474 s, 44.7 MB/s
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:14.766 256+0 records in
00:05:14.766 256+0 records out
00:05:14.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272674 s, 38.5 MB/s
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:14.766 23:39:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:14.767 23:39:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:15.025 23:39:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:15.291 23:39:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:15.562 23:39:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:15.562 23:39:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:15.822 23:39:27 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:17.203 [2024-12-06 23:39:28.417027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:17.203 [2024-12-06 23:39:28.528600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.203 [2024-12-06 23:39:28.528601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:17.203 [2024-12-06 23:39:28.720522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:17.203 [2024-12-06 23:39:28.720635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:19.111 spdk_app_start Round 1
00:05:19.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:19.111 23:39:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:19.111 23:39:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:19.111 23:39:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58357 /var/tmp/spdk-nbd.sock
00:05:19.111 23:39:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58357 ']'
00:05:19.111 23:39:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:19.111 23:39:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.111 23:39:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
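The cycle traced above (`nbd_dd_data_verify ... write` followed by `... verify`) follows a common block-device smoke-test shape: write a known random pattern through the device, then read it back and compare byte-for-byte. A minimal stand-alone sketch of that shape is below; a plain file (`fake_nbd`) stands in for `/dev/nbdX` so the sketch runs without an SPDK target, and the real test additionally uses `oflag=direct` against the actual device node.

```shell
# Hedged sketch of the write/verify cycle: pattern file -> "device" -> cmp.
tmp_file=$(mktemp)
fake_nbd=$(mktemp)   # stand-in for a real /dev/nbdX block device

# Write phase: generate 1 MiB of random data, then push it to the "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$fake_nbd" bs=4096 count=256 2>/dev/null

# Verify phase: compare the first 1 MiB of the "device" against the pattern.
if cmp -b -n 1M "$tmp_file" "$fake_nbd"; then
    echo "verify ok"
fi
rm -f "$tmp_file" "$fake_nbd"
```

Using a random pattern rather than zeros is deliberate: it catches misdirected or stale reads that an all-zero buffer would mask.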
00:05:19.111 23:39:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.112 23:39:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:19.112 23:39:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:19.112 23:39:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:19.112 23:39:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:19.370 Malloc0
00:05:19.370 23:39:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:19.629 Malloc1
00:05:19.629 23:39:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:19.629 23:39:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:19.888 /dev/nbd0
00:05:19.888 23:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:19.888 23:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:19.888 1+0 records in
00:05:19.888 1+0 records out
00:05:19.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546223 s, 7.5 MB/s
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:19.888 23:39:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:19.888 23:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:19.888 23:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:19.888 23:39:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:20.146 /dev/nbd1
00:05:20.146 23:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:20.146 23:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:20.146 1+0 records in
00:05:20.146 1+0 records out
00:05:20.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509693 s, 8.0 MB/s
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:20.146 23:39:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:20.146 23:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:20.146 23:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:20.146 23:39:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:20.146 23:39:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.146 23:39:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:20.404 {
00:05:20.404 "nbd_device": "/dev/nbd0",
00:05:20.404 "bdev_name": "Malloc0"
00:05:20.404 },
00:05:20.404 {
00:05:20.404 "nbd_device": "/dev/nbd1",
00:05:20.404 "bdev_name": "Malloc1"
00:05:20.404 }
00:05:20.404 ]'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:20.404 {
00:05:20.404 "nbd_device": "/dev/nbd0",
00:05:20.404 "bdev_name": "Malloc0"
00:05:20.404 },
00:05:20.404 {
00:05:20.404 "nbd_device": "/dev/nbd1",
00:05:20.404 "bdev_name": "Malloc1"
00:05:20.404 }
00:05:20.404 ]'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:20.404 /dev/nbd1'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:20.404 /dev/nbd1'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:20.404 256+0 records in
00:05:20.404 256+0 records out
00:05:20.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146173 s, 71.7 MB/s
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:20.404 256+0 records in
00:05:20.404 256+0 records out
00:05:20.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237351 s, 44.2 MB/s
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:20.404 256+0 records in
00:05:20.404 256+0 records out
00:05:20.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289395 s, 36.2 MB/s
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:20.404 23:39:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:20.405 23:39:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:20.664 23:39:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.923 23:39:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:21.181 23:39:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:21.181 23:39:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:21.746 23:39:33 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:22.677 [2024-12-06 23:39:34.204860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:22.935 [2024-12-06 23:39:34.321725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.935 [2024-12-06 23:39:34.321749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.192 [2024-12-06 23:39:34.521418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. [2024-12-06 23:39:34.521607] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
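After stopping the disks, the trace counts attached devices by piping the `nbd_get_disks`/jq output through `grep -c /dev/nbd` and requiring the result to be 0. That counting idiom can be sketched on its own as below; a hard-coded name list stands in for the RPC output, and `|| true` is needed because `grep -c` exits non-zero when nothing matches.

```shell
# Hedged sketch of the device-count check: count matching lines, where an
# empty list must yield 0 rather than abort the script.
nbd_disks_name='/dev/nbd0
/dev/nbd1'
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"   # 2 while both disks are attached

nbd_disks_name=
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # 0 after nbd_stop_disk has detached them
```

Guarding the no-match case with `|| true` (the trace shows the equivalent `true` record) keeps the check safe under `set -e` while still capturing the `0` that `grep -c` prints.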
00:05:24.563 23:39:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:24.563 spdk_app_start Round 2
00:05:24.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:24.563 23:39:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:24.563 23:39:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58357 /var/tmp/spdk-nbd.sock
00:05:24.563 23:39:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58357 ']'
00:05:24.563 23:39:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:24.563 23:39:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:24.563 23:39:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:24.563 23:39:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:24.563 23:39:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:24.821 23:39:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:24.821 23:39:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:24.821 23:39:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:25.080 Malloc0
00:05:25.080 23:39:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:25.338 Malloc1
00:05:25.338 23:39:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:25.338 23:39:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:25.598 /dev/nbd0
00:05:25.598 23:39:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:25.598 23:39:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:25.598 1+0 records in
00:05:25.598 1+0 records out
00:05:25.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384741 s, 10.6 MB/s
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:25.598 23:39:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:25.598 23:39:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:25.598 23:39:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:25.598 23:39:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:25.857 /dev/nbd1
00:05:25.857 23:39:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:25.857 23:39:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:25.857 1+0 records in
00:05:25.857 1+0 records out
00:05:25.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404445 s, 10.1 MB/s
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:25.857 23:39:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:25.857 23:39:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:25.857 23:39:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:25.857 23:39:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:25.857 23:39:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.857 23:39:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:26.115 23:39:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:26.115 {
00:05:26.115 "nbd_device": "/dev/nbd0",
00:05:26.115 "bdev_name":
"Malloc1" 00:05:26.115 } 00:05:26.115 ]' 00:05:26.115 23:39:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.115 23:39:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.116 { 00:05:26.116 "nbd_device": "/dev/nbd0", 00:05:26.116 "bdev_name": "Malloc0" 00:05:26.116 }, 00:05:26.116 { 00:05:26.116 "nbd_device": "/dev/nbd1", 00:05:26.116 "bdev_name": "Malloc1" 00:05:26.116 } 00:05:26.116 ]' 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.116 /dev/nbd1' 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.116 /dev/nbd1' 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.116 256+0 records in 00:05:26.116 256+0 records out 00:05:26.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136577 s, 76.8 MB/s 
00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.116 256+0 records in 00:05:26.116 256+0 records out 00:05:26.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025918 s, 40.5 MB/s 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.116 23:39:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.375 256+0 records in 00:05:26.375 256+0 records out 00:05:26.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275231 s, 38.1 MB/s 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.375 23:39:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.633 23:39:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.893 23:39:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.893 23:39:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.462 23:39:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.842 [2024-12-06 23:39:40.024532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.842 [2024-12-06 23:39:40.138198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.842 [2024-12-06 23:39:40.138198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.842 [2024-12-06 23:39:40.335192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.842 [2024-12-06 23:39:40.335334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.751 23:39:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58357 /var/tmp/spdk-nbd.sock 00:05:30.751 23:39:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58357 ']' 00:05:30.751 23:39:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.751 23:39:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.751 23:39:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:30.751 23:39:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.751 23:39:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.751 23:39:42 event.app_repeat -- event/event.sh@39 -- # killprocess 58357 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58357 ']' 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58357 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58357 00:05:30.751 killing process with pid 58357 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58357' 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58357 00:05:30.751 23:39:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58357 00:05:31.691 spdk_app_start is called in Round 0. 00:05:31.691 Shutdown signal received, stop current app iteration 00:05:31.691 Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 reinitialization... 00:05:31.691 spdk_app_start is called in Round 1. 00:05:31.691 Shutdown signal received, stop current app iteration 00:05:31.691 Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 reinitialization... 00:05:31.691 spdk_app_start is called in Round 2. 
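The waitforlisten/waitfornbd helpers exercised throughout this run share one shape: poll a condition up to a fixed retry count, sleeping between attempts, and only then give up. A generic sketch of that loop, with the function name, retry count, and interval as illustrative assumptions (the real waitfornbd's condition is `grep -q -w nbdX /proc/partitions` followed by a read-back dd):

```shell
# Generic poll-until-true loop in the style of waitfornbd/waitforlisten.
# wait_for_condition, the default of 20 retries, and the 0.1 s interval
# are illustrative assumptions, not the harness's exact implementation.
wait_for_condition() {
    local max_retries=${MAX_RETRIES:-20}
    local i
    for (( i = 1; i <= max_retries; i++ )); do
        "$@" && return 0   # condition met: stop polling
        sleep 0.1
    done
    return 1               # retries exhausted
}
```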
00:05:31.691 Shutdown signal received, stop current app iteration 00:05:31.691 Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 reinitialization... 00:05:31.691 spdk_app_start is called in Round 3. 00:05:31.691 Shutdown signal received, stop current app iteration 00:05:31.691 23:39:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.691 23:39:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:31.691 00:05:31.691 real 0m19.432s 00:05:31.691 user 0m41.554s 00:05:31.691 sys 0m2.865s 00:05:31.691 23:39:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.691 ************************************ 00:05:31.691 END TEST app_repeat 00:05:31.691 ************************************ 00:05:31.691 23:39:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.691 23:39:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.691 23:39:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.691 23:39:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.691 23:39:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.691 23:39:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.691 ************************************ 00:05:31.691 START TEST cpu_locks 00:05:31.691 ************************************ 00:05:31.691 23:39:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.951 * Looking for test storage... 
00:05:31.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.951 23:39:43 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.951 23:39:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.951 23:39:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.951 23:39:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.951 23:39:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.952 23:39:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:31.952 23:39:43 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.952 23:39:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.952 --rc genhtml_branch_coverage=1 00:05:31.952 --rc genhtml_function_coverage=1 00:05:31.952 --rc genhtml_legend=1 00:05:31.952 --rc geninfo_all_blocks=1 00:05:31.952 --rc geninfo_unexecuted_blocks=1 00:05:31.952 00:05:31.952 ' 00:05:31.952 23:39:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.952 --rc genhtml_branch_coverage=1 00:05:31.952 --rc genhtml_function_coverage=1 00:05:31.952 --rc genhtml_legend=1 00:05:31.952 --rc geninfo_all_blocks=1 00:05:31.952 --rc geninfo_unexecuted_blocks=1 
00:05:31.952 00:05:31.952 ' 00:05:31.952 23:39:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.952 --rc genhtml_branch_coverage=1 00:05:31.952 --rc genhtml_function_coverage=1 00:05:31.952 --rc genhtml_legend=1 00:05:31.952 --rc geninfo_all_blocks=1 00:05:31.952 --rc geninfo_unexecuted_blocks=1 00:05:31.952 00:05:31.952 ' 00:05:31.952 23:39:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.952 --rc genhtml_branch_coverage=1 00:05:31.952 --rc genhtml_function_coverage=1 00:05:31.952 --rc genhtml_legend=1 00:05:31.952 --rc geninfo_all_blocks=1 00:05:31.952 --rc geninfo_unexecuted_blocks=1 00:05:31.952 00:05:31.952 ' 00:05:31.952 23:39:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.952 23:39:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.952 23:39:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.952 23:39:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.952 23:39:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.952 23:39:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.952 23:39:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.952 ************************************ 00:05:31.952 START TEST default_locks 00:05:31.952 ************************************ 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58806 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.952 
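The lcov gate traced just above (scripts/common.sh's `lt 1.15 2` via cmp_versions) reduces to a field-by-field numeric comparison of dotted version strings: split both on `.`, pad the shorter with zeros, compare left to right. An illustrative reimplementation, with `ver_lt` as an assumed name rather than the harness's own function:

```shell
# Illustrative dotted-version "less than" check in the spirit of
# scripts/common.sh's cmp_versions; ver_lt is an assumed name.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)            # split on dots into numeric fields
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                           # equal versions are not less-than
}
```

Comparing numerically per field is what makes `1.15 < 2` true and `1.2.3 < 1.10` true, where a plain string comparison would get both wrong.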
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58806 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58806 ']' 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.952 23:39:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.212 [2024-12-06 23:39:43.575549] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:05:32.212 [2024-12-06 23:39:43.575695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58806 ] 00:05:32.212 [2024-12-06 23:39:43.749759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.473 [2024-12-06 23:39:43.867251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58806 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58806 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58806 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58806 ']' 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58806 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.414 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58806 00:05:33.674 killing process with pid 58806 00:05:33.674 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.674 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.674 23:39:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58806' 00:05:33.674 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58806 00:05:33.674 23:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58806 00:05:36.222 23:39:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58806 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58806 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58806 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58806 ']' 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.223 ERROR: process (pid: 58806) is no longer running 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58806) - No such process 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.223 00:05:36.223 real 0m3.891s 00:05:36.223 user 0m3.825s 00:05:36.223 sys 0m0.562s 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.223 23:39:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.223 ************************************ 00:05:36.223 END TEST default_locks 00:05:36.223 ************************************ 00:05:36.223 23:39:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.223 23:39:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.223 23:39:47 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.223 23:39:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.223 ************************************ 00:05:36.223 START TEST default_locks_via_rpc 00:05:36.223 ************************************ 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58875 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58875 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58875 ']' 00:05:36.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.223 23:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.223 [2024-12-06 23:39:47.527681] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:05:36.223 [2024-12-06 23:39:47.527803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58875 ] 00:05:36.223 [2024-12-06 23:39:47.702869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.483 [2024-12-06 23:39:47.818572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.421 23:39:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58875 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58875 00:05:37.421 23:39:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58875 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58875 ']' 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58875 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58875 00:05:37.681 killing process with pid 58875 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58875' 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58875 00:05:37.681 23:39:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58875 00:05:40.219 ************************************ 00:05:40.219 END TEST default_locks_via_rpc 00:05:40.219 ************************************ 00:05:40.219 00:05:40.219 real 0m4.120s 00:05:40.219 user 0m4.039s 00:05:40.219 sys 0m0.670s 00:05:40.219 
23:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.219 23:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.219 23:39:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.219 23:39:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.219 23:39:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.219 23:39:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.219 ************************************ 00:05:40.219 START TEST non_locking_app_on_locked_coremask 00:05:40.219 ************************************ 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58955 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58955 /var/tmp/spdk.sock 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58955 ']' 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.219 23:39:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.219 [2024-12-06 23:39:51.713372] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:05:40.219 [2024-12-06 23:39:51.713506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:05:40.478 [2024-12-06 23:39:51.887877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.478 [2024-12-06 23:39:51.998514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58971 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58971 /var/tmp/spdk2.sock 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58971 ']' 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.416 23:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.416 [2024-12-06 23:39:52.932895] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:05:41.416 [2024-12-06 23:39:52.933008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ] 00:05:41.675 [2024-12-06 23:39:53.101898] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:41.675 [2024-12-06 23:39:53.101964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.934 [2024-12-06 23:39:53.335473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58955 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58955 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58955 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58955 ']' 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58955 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
58955 00:05:44.469 killing process with pid 58955 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58955' 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58955 00:05:44.469 23:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58955 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58971 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58971 ']' 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58971 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58971 00:05:49.812 killing process with pid 58971 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58971' 00:05:49.812 23:40:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58971 00:05:49.812 23:40:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58971 00:05:51.735 ************************************ 00:05:51.735 END TEST non_locking_app_on_locked_coremask 00:05:51.735 ************************************ 00:05:51.735 00:05:51.735 real 0m11.261s 00:05:51.735 user 0m11.358s 00:05:51.735 sys 0m1.237s 00:05:51.735 23:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.735 23:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.735 23:40:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.735 23:40:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.735 23:40:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.735 23:40:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.735 ************************************ 00:05:51.735 START TEST locking_app_on_unlocked_coremask 00:05:51.735 ************************************ 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59121 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59121 /var/tmp/spdk.sock 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59121 ']' 
00:05:51.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.735 23:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.735 [2024-12-06 23:40:03.037020] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:05:51.735 [2024-12-06 23:40:03.037141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59121 ] 00:05:51.735 [2024-12-06 23:40:03.195258] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.735 [2024-12-06 23:40:03.195453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.995 [2024-12-06 23:40:03.305692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59137 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59137 /var/tmp/spdk2.sock 00:05:52.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59137 ']' 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.934 23:40:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.934 [2024-12-06 23:40:04.250312] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:05:52.934 [2024-12-06 23:40:04.250453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59137 ] 00:05:52.934 [2024-12-06 23:40:04.419796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.193 [2024-12-06 23:40:04.643931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.726 23:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.726 23:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.726 23:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59137 00:05:55.727 23:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59137 00:05:55.727 23:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59121 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59121 ']' 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59121 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59121 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:05:55.727 killing process with pid 59121 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59121' 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59121 00:05:55.727 23:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59121 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59137 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59137 ']' 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59137 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59137 00:06:00.999 killing process with pid 59137 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59137' 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59137 00:06:00.999 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59137 00:06:02.895 00:06:02.895 real 0m11.358s 00:06:02.895 user 0m11.521s 00:06:02.895 sys 0m1.234s 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.895 ************************************ 00:06:02.895 END TEST locking_app_on_unlocked_coremask 00:06:02.895 ************************************ 00:06:02.895 23:40:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:02.895 23:40:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.895 23:40:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.895 23:40:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.895 ************************************ 00:06:02.895 START TEST locking_app_on_locked_coremask 00:06:02.895 ************************************ 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59285 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59285 /var/tmp/spdk.sock 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59285 ']' 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.895 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.155 [2024-12-06 23:40:14.460421] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:06:03.155 [2024-12-06 23:40:14.460588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59285 ] 00:06:03.155 [2024-12-06 23:40:14.635340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.414 [2024-12-06 23:40:14.748265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59306 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59306 /var/tmp/spdk2.sock 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59306 /var/tmp/spdk2.sock 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59306 /var/tmp/spdk2.sock 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59306 ']' 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.354 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.354 [2024-12-06 23:40:15.701388] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:06:04.354 [2024-12-06 23:40:15.701594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 00:06:04.354 [2024-12-06 23:40:15.871013] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59285 has claimed it. 00:06:04.354 [2024-12-06 23:40:15.871111] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.950 ERROR: process (pid: 59306) is no longer running 00:06:04.950 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59306) - No such process 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59285 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.950 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59285 00:06:05.209 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59285 00:06:05.209 23:40:16 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59285 ']' 00:06:05.209 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59285 00:06:05.209 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.209 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.209 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59285 00:06:05.210 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.210 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.210 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59285' 00:06:05.210 killing process with pid 59285 00:06:05.210 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59285 00:06:05.210 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59285 00:06:07.748 00:06:07.748 real 0m4.668s 00:06:07.748 user 0m4.812s 00:06:07.748 sys 0m0.760s 00:06:07.748 23:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.748 23:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.748 ************************************ 00:06:07.748 END TEST locking_app_on_locked_coremask 00:06:07.748 ************************************ 00:06:07.748 23:40:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.748 23:40:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:07.748 23:40:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.748 23:40:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.748 ************************************ 00:06:07.748 START TEST locking_overlapped_coremask 00:06:07.748 ************************************ 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59370 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59370 /var/tmp/spdk.sock 00:06:07.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59370 ']' 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.748 23:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.748 [2024-12-06 23:40:19.198989] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:06:07.748 [2024-12-06 23:40:19.199195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59370 ] 00:06:08.008 [2024-12-06 23:40:19.375318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.008 [2024-12-06 23:40:19.497509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.008 [2024-12-06 23:40:19.497650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.008 [2024-12-06 23:40:19.497729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59388 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59388 /var/tmp/spdk2.sock 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59388 /var/tmp/spdk2.sock 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59388 /var/tmp/spdk2.sock 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59388 ']' 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.949 23:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.949 [2024-12-06 23:40:20.467960] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:06:08.949 [2024-12-06 23:40:20.468547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59388 ] 00:06:09.209 [2024-12-06 23:40:20.645798] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59370 has claimed it. 00:06:09.209 [2024-12-06 23:40:20.645876] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
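The failure traced above is a plain bitmask overlap: the first target was started with `-m 0x7` (cores 0-2) and the second with `-m 0x1c` (cores 2-4), so both claim core 2 and the second target's `claim_cpu_cores` aborts. A minimal sketch of that overlap check — `mask_to_cores` is a hypothetical helper of mine, not an SPDK function:

```shell
# Hypothetical helper: list the core indices set in a CPU mask.
mask_to_cores() {
  local mask=$(( $1 )) cores=() i
  for (( i = 0; i < 64; i++ )); do
    (( mask & (1 << i) )) && cores+=("$i")
  done
  echo "${cores[*]}"
}

# First spdk_tgt claims 0x7, the second requests 0x1c (values from the log).
first=$(mask_to_cores 0x7)     # cores 0 1 2
second=$(mask_to_cores 0x1c)   # cores 2 3 4
overlap=$(( 0x7 & 0x1c ))      # non-zero bit => conflicting claim (core 2)
```

Any non-zero result of the AND means at least one core is doubly claimed, which is exactly the "Cannot create lock on core 2" error in the trace.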
00:06:09.780 ERROR: process (pid: 59388) is no longer running 00:06:09.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59388) - No such process 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59370 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59370 ']' 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59370 00:06:09.780 23:40:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59370 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59370' 00:06:09.780 killing process with pid 59370 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59370 00:06:09.780 23:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59370 00:06:12.319 00:06:12.319 real 0m4.500s 00:06:12.319 user 0m12.271s 00:06:12.319 sys 0m0.607s 00:06:12.319 23:40:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.319 23:40:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.319 ************************************ 00:06:12.319 END TEST locking_overlapped_coremask 00:06:12.319 ************************************ 00:06:12.320 23:40:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.320 23:40:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.320 23:40:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.320 23:40:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.320 ************************************ 00:06:12.320 START TEST 
locking_overlapped_coremask_via_rpc 00:06:12.320 ************************************ 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59458 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59458 /var/tmp/spdk.sock 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59458 ']' 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.320 23:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.320 [2024-12-06 23:40:23.761577] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:06:12.320 [2024-12-06 23:40:23.761819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59458 ] 00:06:12.579 [2024-12-06 23:40:23.936609] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.579 [2024-12-06 23:40:23.936801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.579 [2024-12-06 23:40:24.059454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.579 [2024-12-06 23:40:24.059561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.579 [2024-12-06 23:40:24.059571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59476 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59476 /var/tmp/spdk2.sock 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59476 ']' 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.519 23:40:24 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.519 23:40:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.519 [2024-12-06 23:40:25.027436] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:06:13.519 [2024-12-06 23:40:25.027635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59476 ] 00:06:13.778 [2024-12-06 23:40:25.198723] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.778 [2024-12-06 23:40:25.198778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.036 [2024-12-06 23:40:25.433200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.036 [2024-12-06 23:40:25.433234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.036 [2024-12-06 23:40:25.433285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.578 23:40:27 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.578 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.578 [2024-12-06 23:40:27.618946] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59458 has claimed it. 00:06:16.578 request: 00:06:16.578 { 00:06:16.578 "method": "framework_enable_cpumask_locks", 00:06:16.578 "req_id": 1 00:06:16.578 } 00:06:16.578 Got JSON-RPC error response 00:06:16.578 response: 00:06:16.578 { 00:06:16.578 "code": -32603, 00:06:16.578 "message": "Failed to claim CPU core: 2" 00:06:16.578 } 00:06:16.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
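The `rpc_cmd framework_enable_cpumask_locks` call above travels to the second target as a JSON-RPC request over its UNIX socket, and the overlapping claim comes back as the `-32603` error object shown in the trace. The sketch below only rebuilds the payload and error code as printed in this log; it does no socket I/O:

```shell
# Rebuild the request/error pair seen in the trace (illustrative only;
# the real test sends this via rpc_cmd over /var/tmp/spdk2.sock).
method=framework_enable_cpumask_locks
request=$(printf '{"method": "%s", "req_id": 1}' "$method")
claim_error=-32603   # "Failed to claim CPU core: 2" per the response above
```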
00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59458 /var/tmp/spdk.sock 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59458 ']' 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59476 /var/tmp/spdk2.sock 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59476 ']' 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
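`check_remaining_locks`, traced in this log, globs `/var/tmp/spdk_cpu_lock_*` and compares the result against the brace expansion `/var/tmp/spdk_cpu_lock_{000..002}` for the three claimed cores. Here is the same comparison pointed at a throwaway directory instead of `/var/tmp` (the paths are illustrative, not the test's real ones):

```shell
# Sketch of the check_remaining_locks comparison, against a temp dir.
lockdir=$(mktemp -d)
touch "$lockdir"/spdk_cpu_lock_{000..002}

locks=("$lockdir"/spdk_cpu_lock_*)                    # what actually exists
locks_expected=("$lockdir"/spdk_cpu_lock_{000..002})  # cores 0-2 claimed
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
  lock_check=ok
else
  lock_check=mismatch
fi
rm -rf "$lockdir"
```

The glob expands in sorted order, which matches the zero-padded brace sequence, so a simple string comparison of the joined arrays is enough.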
00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.579 23:40:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.579 23:40:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.579 23:40:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.579 23:40:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:16.579 23:40:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.579 23:40:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.579 23:40:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.579 00:06:16.579 real 0m4.390s 00:06:16.579 user 0m1.318s 00:06:16.579 sys 0m0.197s 00:06:16.579 23:40:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.579 23:40:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.579 ************************************ 00:06:16.579 END TEST locking_overlapped_coremask_via_rpc 00:06:16.579 ************************************ 00:06:16.579 23:40:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:16.579 23:40:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59458 ]] 00:06:16.579 23:40:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59458 00:06:16.579 23:40:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59458 ']' 00:06:16.579 23:40:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59458 00:06:16.579 23:40:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:16.579 23:40:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.579 23:40:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59458 00:06:16.839 23:40:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.839 23:40:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.839 23:40:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59458' 00:06:16.839 killing process with pid 59458 00:06:16.839 23:40:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59458 00:06:16.839 23:40:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59458 00:06:19.394 23:40:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59476 ]] 00:06:19.394 23:40:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59476 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59476 ']' 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59476 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59476 00:06:19.394 killing process with pid 59476 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59476' 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59476 00:06:19.394 23:40:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59476 00:06:21.940 23:40:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.940 23:40:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:21.940 23:40:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59458 ]] 00:06:21.940 23:40:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59458 00:06:21.940 Process with pid 59458 is not found 00:06:21.940 23:40:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59458 ']' 00:06:21.940 23:40:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59458 00:06:21.940 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59458) - No such process 00:06:21.940 23:40:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59458 is not found' 00:06:21.940 23:40:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59476 ]] 00:06:21.940 23:40:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59476 00:06:21.940 23:40:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59476 ']' 00:06:21.940 23:40:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59476 00:06:21.940 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59476) - No such process 00:06:21.940 23:40:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59476 is not found' 00:06:21.940 Process with pid 59476 is not found 00:06:21.940 23:40:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.940 00:06:21.940 real 0m49.829s 00:06:21.940 user 1m25.310s 00:06:21.940 sys 0m6.597s 00:06:21.940 23:40:33 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.940 23:40:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.940 
************************************ 00:06:21.940 END TEST cpu_locks 00:06:21.940 ************************************ 00:06:21.940 00:06:21.940 real 1m19.079s 00:06:21.940 user 2m21.660s 00:06:21.940 sys 0m10.724s 00:06:21.940 23:40:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.940 23:40:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.940 ************************************ 00:06:21.940 END TEST event 00:06:21.940 ************************************ 00:06:21.940 23:40:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.940 23:40:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.940 23:40:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.940 23:40:33 -- common/autotest_common.sh@10 -- # set +x 00:06:21.940 ************************************ 00:06:21.940 START TEST thread 00:06:21.940 ************************************ 00:06:21.940 23:40:33 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.940 * Looking for test storage... 
00:06:21.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:21.940 23:40:33 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.940 23:40:33 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.940 23:40:33 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.940 23:40:33 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.940 23:40:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.940 23:40:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.940 23:40:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.940 23:40:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.940 23:40:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.940 23:40:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.940 23:40:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.940 23:40:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.940 23:40:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.940 23:40:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.941 23:40:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.941 23:40:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:21.941 23:40:33 thread -- scripts/common.sh@345 -- # : 1 00:06:21.941 23:40:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.941 23:40:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.941 23:40:33 thread -- scripts/common.sh@365 -- # decimal 1 00:06:21.941 23:40:33 thread -- scripts/common.sh@353 -- # local d=1 00:06:21.941 23:40:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.941 23:40:33 thread -- scripts/common.sh@355 -- # echo 1 00:06:21.941 23:40:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.941 23:40:33 thread -- scripts/common.sh@366 -- # decimal 2 00:06:21.941 23:40:33 thread -- scripts/common.sh@353 -- # local d=2 00:06:21.941 23:40:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.941 23:40:33 thread -- scripts/common.sh@355 -- # echo 2 00:06:21.941 23:40:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.941 23:40:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.941 23:40:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.941 23:40:33 thread -- scripts/common.sh@368 -- # return 0 00:06:21.941 23:40:33 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.941 23:40:33 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.941 --rc genhtml_branch_coverage=1 00:06:21.941 --rc genhtml_function_coverage=1 00:06:21.941 --rc genhtml_legend=1 00:06:21.941 --rc geninfo_all_blocks=1 00:06:21.941 --rc geninfo_unexecuted_blocks=1 00:06:21.941 00:06:21.941 ' 00:06:21.941 23:40:33 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.941 --rc genhtml_branch_coverage=1 00:06:21.941 --rc genhtml_function_coverage=1 00:06:21.941 --rc genhtml_legend=1 00:06:21.941 --rc geninfo_all_blocks=1 00:06:21.941 --rc geninfo_unexecuted_blocks=1 00:06:21.941 00:06:21.941 ' 00:06:21.941 23:40:33 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.941 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.941 --rc genhtml_branch_coverage=1 00:06:21.941 --rc genhtml_function_coverage=1 00:06:21.941 --rc genhtml_legend=1 00:06:21.941 --rc geninfo_all_blocks=1 00:06:21.941 --rc geninfo_unexecuted_blocks=1 00:06:21.941 00:06:21.941 ' 00:06:21.941 23:40:33 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.941 --rc genhtml_branch_coverage=1 00:06:21.941 --rc genhtml_function_coverage=1 00:06:21.941 --rc genhtml_legend=1 00:06:21.941 --rc geninfo_all_blocks=1 00:06:21.941 --rc geninfo_unexecuted_blocks=1 00:06:21.941 00:06:21.941 ' 00:06:21.941 23:40:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.941 23:40:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:21.941 23:40:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.941 23:40:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.941 ************************************ 00:06:21.941 START TEST thread_poller_perf 00:06:21.941 ************************************ 00:06:21.941 23:40:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.941 [2024-12-06 23:40:33.466230] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
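The `cmp_versions 1.15 '<' 2` trace above splits each version on `.`/`-` and compares components numerically. A simplified "less than" in the same spirit — `version_lt` is my own reduced sketch, not the real `scripts/common.sh` helper (which also handles suffixes and other operators):

```shell
# Minimal numeric, component-wise dotted-version comparison (illustrative).
version_lt() {
  local IFS=. i
  local -a a=($1) b=($2)
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && lt_result=yes || lt_result=no   # 1 < 2, so yes
```

Note the components compare as integers, so `1.15` sorts *after* `1.2` — the reason the helper reads versions component-wise rather than as plain strings.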
00:06:21.941 [2024-12-06 23:40:33.466428] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59671 ] 00:06:22.202 [2024-12-06 23:40:33.640613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.202 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:22.202 [2024-12-06 23:40:33.754171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.577 [2024-12-06T23:40:35.140Z] ====================================== 00:06:23.577 [2024-12-06T23:40:35.140Z] busy:2298005816 (cyc) 00:06:23.577 [2024-12-06T23:40:35.140Z] total_run_count: 400000 00:06:23.577 [2024-12-06T23:40:35.140Z] tsc_hz: 2290000000 (cyc) 00:06:23.577 [2024-12-06T23:40:35.140Z] ====================================== 00:06:23.577 [2024-12-06T23:40:35.140Z] poller_cost: 5745 (cyc), 2508 (nsec) 00:06:23.577 00:06:23.577 ************************************ 00:06:23.577 END TEST thread_poller_perf 00:06:23.577 ************************************ 00:06:23.577 real 0m1.569s 00:06:23.577 user 0m1.366s 00:06:23.577 sys 0m0.094s 00:06:23.577 23:40:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.577 23:40:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.577 23:40:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.577 23:40:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:23.577 23:40:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.577 23:40:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.577 ************************************ 00:06:23.577 START TEST thread_poller_perf 00:06:23.577 
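The `poller_cost` line in the result block above is just the busy cycle count divided by the run count, converted to nanoseconds via the reported TSC frequency. Re-deriving it with integer arithmetic (variable names are mine; all values come straight from the log):

```shell
# Re-derive poller_cost from the ====== result block above.
busy_cyc=2298005816
total_run_count=400000
tsc_hz=2290000000    # 2290000000 cyc/s, i.e. 2.29 GHz

poller_cost_cyc=$(( busy_cyc / total_run_count ))              # 5745 cyc
poller_cost_nsec=$(( poller_cost_cyc * 1000000000 / tsc_hz ))  # 2508 nsec
```

The same arithmetic applied to the later 0-period run (2294173494 cyc over 4886000 calls) yields the 469 cyc / 204 nsec figure reported there: timer bookkeeping makes a 1 µs timed poller roughly an order of magnitude costlier per call than a busy poller.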
************************************ 00:06:23.577 23:40:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.577 [2024-12-06 23:40:35.102770] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:06:23.577 [2024-12-06 23:40:35.102874] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59713 ] 00:06:23.835 [2024-12-06 23:40:35.274025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.835 [2024-12-06 23:40:35.386256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.835 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:25.209 [2024-12-06T23:40:36.773Z] ====================================== 00:06:25.210 [2024-12-06T23:40:36.773Z] busy:2294173494 (cyc) 00:06:25.210 [2024-12-06T23:40:36.773Z] total_run_count: 4886000 00:06:25.210 [2024-12-06T23:40:36.773Z] tsc_hz: 2290000000 (cyc) 00:06:25.210 [2024-12-06T23:40:36.773Z] ====================================== 00:06:25.210 [2024-12-06T23:40:36.773Z] poller_cost: 469 (cyc), 204 (nsec) 00:06:25.210 00:06:25.210 real 0m1.552s 00:06:25.210 user 0m1.348s 00:06:25.210 sys 0m0.095s 00:06:25.210 23:40:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.210 23:40:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.210 ************************************ 00:06:25.210 END TEST thread_poller_perf 00:06:25.210 ************************************ 00:06:25.210 23:40:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:25.210 00:06:25.210 real 0m3.478s 00:06:25.210 user 0m2.877s 00:06:25.210 sys 0m0.397s 00:06:25.210 23:40:36 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.210 23:40:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.210 ************************************ 00:06:25.210 END TEST thread 00:06:25.210 ************************************ 00:06:25.210 23:40:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:25.210 23:40:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.210 23:40:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.210 23:40:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.210 23:40:36 -- common/autotest_common.sh@10 -- # set +x 00:06:25.210 ************************************ 00:06:25.210 START TEST app_cmdline 00:06:25.210 ************************************ 00:06:25.210 23:40:36 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.469 * Looking for test storage... 00:06:25.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:25.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
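The thread_poller_perf summaries earlier in this log report poller_cost in cycles and nanoseconds alongside busy cycles, total_run_count, and tsc_hz. Assuming the tool derives cost by integer division of busy TSC cycles by the run count, then scales to nanoseconds via tsc_hz (an assumption, but one that reproduces both logged runs exactly), the arithmetic can be checked with a short Python sketch:

```python
# Reproduce the poller_cost figures from the two thread_poller_perf runs above.
# Assumption: cost_cyc = busy // total_run_count and
# cost_nsec = cost_cyc * 1e9 // tsc_hz, both with integer truncation.

def poller_cost(busy_cyc, total_run_count, tsc_hz):
    """Return (cost_in_cycles, cost_in_nanoseconds) per poller invocation."""
    cyc = busy_cyc // total_run_count
    nsec = cyc * 10**9 // tsc_hz
    return cyc, nsec

# Run 1: 1 us poller period -> 400000 runs in ~1 s
print(poller_cost(2298005816, 400000, 2290000000))   # (5745, 2508) in the log
# Run 2: 0 us poller period -> 4886000 runs in ~1 s
print(poller_cost(2294173494, 4886000, 2290000000))  # (469, 204) in the log
```

Both results match the logged poller_cost lines, which supports the truncating-division assumption.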
00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.469 23:40:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.469 --rc genhtml_branch_coverage=1 00:06:25.469 --rc genhtml_function_coverage=1 00:06:25.469 --rc genhtml_legend=1 00:06:25.469 --rc geninfo_all_blocks=1 00:06:25.469 --rc geninfo_unexecuted_blocks=1 00:06:25.469 00:06:25.469 ' 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.469 --rc genhtml_branch_coverage=1 00:06:25.469 --rc genhtml_function_coverage=1 00:06:25.469 --rc genhtml_legend=1 00:06:25.469 --rc geninfo_all_blocks=1 00:06:25.469 --rc geninfo_unexecuted_blocks=1 00:06:25.469 00:06:25.469 ' 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.469 --rc genhtml_branch_coverage=1 00:06:25.469 --rc genhtml_function_coverage=1 00:06:25.469 --rc genhtml_legend=1 00:06:25.469 --rc geninfo_all_blocks=1 00:06:25.469 --rc geninfo_unexecuted_blocks=1 00:06:25.469 00:06:25.469 ' 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.469 --rc genhtml_branch_coverage=1 00:06:25.469 --rc genhtml_function_coverage=1 00:06:25.469 --rc genhtml_legend=1 00:06:25.469 --rc geninfo_all_blocks=1 00:06:25.469 --rc 
geninfo_unexecuted_blocks=1 00:06:25.469 00:06:25.469 ' 00:06:25.469 23:40:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:25.469 23:40:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59802 00:06:25.469 23:40:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59802 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59802 ']' 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.469 23:40:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.469 23:40:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:25.469 [2024-12-06 23:40:37.007200] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
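The scripts/common.sh trace above (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2`) splits both version strings on `.`, `-`, and `:` and compares them component by component, returning success because lcov 1.15 predates 2 (which is why the legacy LCOV_OPTS get exported). A minimal Python approximation of that comparison, not a line-for-line port of the shell function:

```python
import re

def cmp_lt(ver1, ver2):
    """Rough equivalent of scripts/common.sh `lt`: split on [.-:] and compare
    components numerically. Non-numeric components are treated as 0 here for
    simplicity (the shell `decimal` helper's fallback branch is not shown in
    the trace above, so this is an assumption)."""
    a = re.split(r"[.\-:]", ver1)
    b = re.split(r"[.\-:]", ver2)
    for i in range(max(len(a), len(b))):
        x = int(a[i]) if i < len(a) and a[i].isdigit() else 0
        y = int(b[i]) if i < len(b) and b[i].isdigit() else 0
        if x != y:
            return x < y
    return False  # equal versions are not "less than"

print(cmp_lt("1.15", "2"))  # True: matches the traced `lt 1.15 2` returning 0
```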
00:06:25.469 [2024-12-06 23:40:37.007317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ] 00:06:25.727 [2024-12-06 23:40:37.181853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.986 [2024-12-06 23:40:37.297475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:26.923 { 00:06:26.923 "version": "SPDK v25.01-pre git sha1 dd2b3744d", 00:06:26.923 "fields": { 00:06:26.923 "major": 25, 00:06:26.923 "minor": 1, 00:06:26.923 "patch": 0, 00:06:26.923 "suffix": "-pre", 00:06:26.923 "commit": "dd2b3744d" 00:06:26.923 } 00:06:26.923 } 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.923 23:40:38 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:26.923 23:40:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:26.923 23:40:38 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.181 request: 00:06:27.181 { 00:06:27.181 "method": "env_dpdk_get_mem_stats", 00:06:27.181 "req_id": 1 00:06:27.181 } 00:06:27.181 Got JSON-RPC error response 00:06:27.181 response: 00:06:27.181 { 00:06:27.181 "code": -32601, 00:06:27.181 "message": "Method not found" 00:06:27.181 } 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
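Above, spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so the `env_dpdk_get_mem_stats` call is rejected with JSON-RPC error -32601 ("Method not found"), exactly what the NOT wrapper expects. A toy sketch of a server-side allowlist producing that standard error code (the two method names come from the log; the dispatcher itself is illustrative, not SPDK's implementation):

```python
import json

ALLOWED = {"spdk_get_version", "rpc_get_methods"}  # from --rpcs-allowed in the log

def dispatch(request_json):
    """Toy JSON-RPC dispatcher: reject methods outside the allowlist with the
    standard JSON-RPC 2.0 "Method not found" error, code -32601."""
    req = json.loads(request_json)
    if req["method"] not in ALLOWED:
        return json.dumps({
            "jsonrpc": "2.0",
            "id": req.get("id"),
            "error": {"code": -32601, "message": "Method not found"},
        })
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": {}})

resp = dispatch('{"jsonrpc": "2.0", "id": 1, "method": "env_dpdk_get_mem_stats"}')
print(json.loads(resp)["error"]["code"])  # -32601, matching the logged response
```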
00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.181 23:40:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59802 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59802 ']' 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59802 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59802 00:06:27.181 killing process with pid 59802 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59802' 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@973 -- # kill 59802 00:06:27.181 23:40:38 app_cmdline -- common/autotest_common.sh@978 -- # wait 59802 00:06:29.717 00:06:29.717 real 0m4.351s 00:06:29.717 user 0m4.537s 00:06:29.717 sys 0m0.616s 00:06:29.717 ************************************ 00:06:29.717 END TEST app_cmdline 00:06:29.717 ************************************ 00:06:29.717 23:40:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.717 23:40:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.717 23:40:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.717 23:40:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.717 23:40:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.717 23:40:41 -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.717 ************************************ 00:06:29.717 START TEST version 00:06:29.717 ************************************ 00:06:29.717 23:40:41 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.717 * Looking for test storage... 00:06:29.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.717 23:40:41 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.717 23:40:41 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.717 23:40:41 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.977 23:40:41 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.977 23:40:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.977 23:40:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.977 23:40:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.977 23:40:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.977 23:40:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.977 23:40:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.977 23:40:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.977 23:40:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.977 23:40:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.977 23:40:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.977 23:40:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.977 23:40:41 version -- scripts/common.sh@344 -- # case "$op" in 00:06:29.977 23:40:41 version -- scripts/common.sh@345 -- # : 1 00:06:29.977 23:40:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.977 23:40:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.977 23:40:41 version -- scripts/common.sh@365 -- # decimal 1 00:06:29.977 23:40:41 version -- scripts/common.sh@353 -- # local d=1 00:06:29.977 23:40:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.977 23:40:41 version -- scripts/common.sh@355 -- # echo 1 00:06:29.977 23:40:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.977 23:40:41 version -- scripts/common.sh@366 -- # decimal 2 00:06:29.977 23:40:41 version -- scripts/common.sh@353 -- # local d=2 00:06:29.977 23:40:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.977 23:40:41 version -- scripts/common.sh@355 -- # echo 2 00:06:29.977 23:40:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.977 23:40:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.977 23:40:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.977 23:40:41 version -- scripts/common.sh@368 -- # return 0 00:06:29.977 23:40:41 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.977 23:40:41 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.977 --rc genhtml_branch_coverage=1 00:06:29.977 --rc genhtml_function_coverage=1 00:06:29.977 --rc genhtml_legend=1 00:06:29.977 --rc geninfo_all_blocks=1 00:06:29.977 --rc geninfo_unexecuted_blocks=1 00:06:29.977 00:06:29.977 ' 00:06:29.977 23:40:41 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.977 --rc genhtml_branch_coverage=1 00:06:29.977 --rc genhtml_function_coverage=1 00:06:29.977 --rc genhtml_legend=1 00:06:29.977 --rc geninfo_all_blocks=1 00:06:29.977 --rc geninfo_unexecuted_blocks=1 00:06:29.977 00:06:29.977 ' 00:06:29.977 23:40:41 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.977 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.977 --rc genhtml_branch_coverage=1 00:06:29.977 --rc genhtml_function_coverage=1 00:06:29.977 --rc genhtml_legend=1 00:06:29.977 --rc geninfo_all_blocks=1 00:06:29.977 --rc geninfo_unexecuted_blocks=1 00:06:29.977 00:06:29.977 ' 00:06:29.977 23:40:41 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.977 --rc genhtml_branch_coverage=1 00:06:29.977 --rc genhtml_function_coverage=1 00:06:29.977 --rc genhtml_legend=1 00:06:29.977 --rc geninfo_all_blocks=1 00:06:29.977 --rc geninfo_unexecuted_blocks=1 00:06:29.977 00:06:29.977 ' 00:06:29.977 23:40:41 version -- app/version.sh@17 -- # get_header_version major 00:06:29.977 23:40:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.977 23:40:41 version -- app/version.sh@14 -- # cut -f2 00:06:29.977 23:40:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.977 23:40:41 version -- app/version.sh@17 -- # major=25 00:06:29.977 23:40:41 version -- app/version.sh@18 -- # get_header_version minor 00:06:29.977 23:40:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.977 23:40:41 version -- app/version.sh@14 -- # cut -f2 00:06:29.977 23:40:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.977 23:40:41 version -- app/version.sh@18 -- # minor=1 00:06:29.977 23:40:41 version -- app/version.sh@19 -- # get_header_version patch 00:06:29.977 23:40:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.977 23:40:41 version -- app/version.sh@14 -- # cut -f2 00:06:29.977 23:40:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.977 23:40:41 version -- app/version.sh@19 -- # patch=0 00:06:29.977 
23:40:41 version -- app/version.sh@20 -- # get_header_version suffix 00:06:29.977 23:40:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.977 23:40:41 version -- app/version.sh@14 -- # cut -f2 00:06:29.977 23:40:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.977 23:40:41 version -- app/version.sh@20 -- # suffix=-pre 00:06:29.977 23:40:41 version -- app/version.sh@22 -- # version=25.1 00:06:29.977 23:40:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:29.977 23:40:41 version -- app/version.sh@28 -- # version=25.1rc0 00:06:29.977 23:40:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:29.977 23:40:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:29.977 23:40:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:29.977 23:40:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:29.977 ************************************ 00:06:29.977 END TEST version 00:06:29.977 ************************************ 00:06:29.977 00:06:29.977 real 0m0.322s 00:06:29.977 user 0m0.200s 00:06:29.977 sys 0m0.178s 00:06:29.977 23:40:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.977 23:40:41 version -- common/autotest_common.sh@10 -- # set +x 00:06:29.977 23:40:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:29.977 23:40:41 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:29.977 23:40:41 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:29.977 23:40:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.977 23:40:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.977 23:40:41 -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.977 ************************************ 00:06:29.977 START TEST bdev_raid 00:06:29.977 ************************************ 00:06:29.977 23:40:41 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:30.238 * Looking for test storage... 00:06:30.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.238 23:40:41 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.238 --rc genhtml_branch_coverage=1 00:06:30.238 --rc genhtml_function_coverage=1 00:06:30.238 --rc genhtml_legend=1 00:06:30.238 --rc geninfo_all_blocks=1 00:06:30.238 --rc geninfo_unexecuted_blocks=1 00:06:30.238 00:06:30.238 ' 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.238 --rc genhtml_branch_coverage=1 00:06:30.238 --rc genhtml_function_coverage=1 00:06:30.238 --rc genhtml_legend=1 00:06:30.238 --rc geninfo_all_blocks=1 00:06:30.238 --rc geninfo_unexecuted_blocks=1 00:06:30.238 00:06:30.238 ' 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:06:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.238 --rc genhtml_branch_coverage=1 00:06:30.238 --rc genhtml_function_coverage=1 00:06:30.238 --rc genhtml_legend=1 00:06:30.238 --rc geninfo_all_blocks=1 00:06:30.238 --rc geninfo_unexecuted_blocks=1 00:06:30.238 00:06:30.238 ' 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.238 --rc genhtml_branch_coverage=1 00:06:30.238 --rc genhtml_function_coverage=1 00:06:30.238 --rc genhtml_legend=1 00:06:30.238 --rc geninfo_all_blocks=1 00:06:30.238 --rc geninfo_unexecuted_blocks=1 00:06:30.238 00:06:30.238 ' 00:06:30.238 23:40:41 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:30.238 23:40:41 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.238 23:40:41 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:30.238 23:40:41 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:30.238 23:40:41 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:30.238 23:40:41 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:30.238 23:40:41 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.238 23:40:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.238 ************************************ 00:06:30.238 START TEST raid1_resize_data_offset_test 00:06:30.238 ************************************ 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59984 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59984' 00:06:30.238 Process raid pid: 59984 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59984 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59984 ']' 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.238 23:40:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.499 [2024-12-06 23:40:41.867584] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:06:30.499 [2024-12-06 23:40:41.867817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.499 [2024-12-06 23:40:42.044062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.759 [2024-12-06 23:40:42.164587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.019 [2024-12-06 23:40:42.369322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.019 [2024-12-06 23:40:42.369382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.291 malloc0 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.291 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.551 malloc1 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.551 23:40:42 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.551 null0 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.551 [2024-12-06 23:40:42.882119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:31.551 [2024-12-06 23:40:42.884026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:31.551 [2024-12-06 23:40:42.884086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:31.551 [2024-12-06 23:40:42.884257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:31.551 [2024-12-06 23:40:42.884271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:31.551 [2024-12-06 23:40:42.884547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:31.551 [2024-12-06 23:40:42.884763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:31.551 [2024-12-06 23:40:42.884780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:31.551 [2024-12-06 23:40:42.884928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
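The raid1_resize_data_offset_test reads the third base bdev's data_offset from the `bdev_raid_get_bdevs all` output with the jq path `.[].base_bdevs_list[2].data_offset`, checking 2048 before the resize and 2070 after malloc2 is added. A Python sketch of the same extraction over a minimal, hypothetical response shape (the real RPC output carries many more fields per bdev):

```python
import json

# Hypothetical, trimmed bdev_raid_get_bdevs output: one raid bdev, three base bdevs.
sample = json.loads("""
[{"name": "Raid",
  "base_bdevs_list": [
      {"name": "malloc0", "data_offset": 2048},
      {"name": "malloc1", "data_offset": 2048},
      {"name": "null0",   "data_offset": 2048}]}]
""")

# Equivalent of jq '.[].base_bdevs_list[2].data_offset'
offsets = [bdev["base_bdevs_list"][2]["data_offset"] for bdev in sample]
print(offsets[0])  # 2048 for this sample; the test later sees 2070 post-resize
```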
00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.551 [2024-12-06 23:40:42.946060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.551 23:40:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.122 malloc2 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.122 [2024-12-06 23:40:43.508062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:32.122 [2024-12-06 23:40:43.525703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.122 [2024-12-06 23:40:43.527640] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59984 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59984 ']' 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59984 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59984 00:06:32.122 killing process with pid 59984 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59984' 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59984 00:06:32.122 23:40:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59984 00:06:32.122 [2024-12-06 23:40:43.610643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:32.122 [2024-12-06 23:40:43.612249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:32.122 [2024-12-06 23:40:43.612318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.122 [2024-12-06 23:40:43.612339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:32.122 [2024-12-06 23:40:43.648911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:32.122 [2024-12-06 23:40:43.649227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:32.122 [2024-12-06 23:40:43.649246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:34.050 [2024-12-06 23:40:45.415367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:34.987 23:40:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:34.987 00:06:34.987 real 0m4.752s 00:06:34.987 user 0m4.628s 00:06:34.987 sys 0m0.552s 00:06:34.987 23:40:46 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.987 ************************************ 00:06:34.987 END TEST raid1_resize_data_offset_test 00:06:34.987 ************************************ 00:06:34.987 23:40:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.247 23:40:46 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:35.247 23:40:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:35.247 23:40:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.247 23:40:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:35.247 ************************************ 00:06:35.247 START TEST raid0_resize_superblock_test 00:06:35.247 ************************************ 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60073 00:06:35.247 Process raid pid: 60073 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60073' 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60073 00:06:35.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60073 ']' 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.247 23:40:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.247 [2024-12-06 23:40:46.686724] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:06:35.247 [2024-12-06 23:40:46.686932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.506 [2024-12-06 23:40:46.863933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.506 [2024-12-06 23:40:46.969401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.766 [2024-12-06 23:40:47.171341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.766 [2024-12-06 23:40:47.171448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.026 23:40:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.026 23:40:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:36.026 23:40:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:36.026 23:40:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.026 23:40:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.595 malloc0 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.595 [2024-12-06 23:40:48.035032] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:36.595 [2024-12-06 23:40:48.035171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.595 [2024-12-06 23:40:48.035216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:36.595 [2024-12-06 23:40:48.035256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.595 [2024-12-06 23:40:48.037361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.595 [2024-12-06 23:40:48.037418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:36.595 pt0 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.595 011df7db-1214-4e4d-9d2c-d39af76a76f7 00:06:36.595 23:40:48 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.595 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.596 01f84bc7-dfd6-4659-aa2e-ca02ef30cba0 00:06:36.596 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.596 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:36.596 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.596 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 576538eb-63a9-4506-8755-6d40b855acf4 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 [2024-12-06 23:40:48.168474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 01f84bc7-dfd6-4659-aa2e-ca02ef30cba0 is claimed 00:06:36.856 [2024-12-06 23:40:48.168675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 576538eb-63a9-4506-8755-6d40b855acf4 is claimed 00:06:36.856 [2024-12-06 23:40:48.168901] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:36.856 [2024-12-06 23:40:48.168973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:36.856 [2024-12-06 23:40:48.169286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:36.856 [2024-12-06 23:40:48.169539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:36.856 [2024-12-06 23:40:48.169593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:36.856 [2024-12-06 23:40:48.169848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 23:40:48 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 [2024-12-06 23:40:48.284526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 [2024-12-06 23:40:48.316452] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:36.856 [2024-12-06 23:40:48.316530] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '01f84bc7-dfd6-4659-aa2e-ca02ef30cba0' was resized: old size 131072, new size 204800 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 [2024-12-06 23:40:48.328320] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:36.856 [2024-12-06 23:40:48.328391] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '576538eb-63a9-4506-8755-6d40b855acf4' was resized: old size 131072, new size 204800 00:06:36.856 [2024-12-06 23:40:48.328446] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:36.856 23:40:48 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.856 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.117 [2024-12-06 23:40:48.444177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.117 [2024-12-06 23:40:48.467957] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:37.117 [2024-12-06 23:40:48.468077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:37.117 [2024-12-06 23:40:48.468116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:37.117 [2024-12-06 23:40:48.468166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:37.117 [2024-12-06 23:40:48.468302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.117 [2024-12-06 23:40:48.468373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.117 [2024-12-06 23:40:48.468437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.117 [2024-12-06 23:40:48.479881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:37.117 [2024-12-06 23:40:48.479981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.117 [2024-12-06 23:40:48.480008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:37.117 [2024-12-06 23:40:48.480021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.117 
[2024-12-06 23:40:48.482211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.117 [2024-12-06 23:40:48.482260] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:37.117 pt0 00:06:37.117 [2024-12-06 23:40:48.484080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 01f84bc7-dfd6-4659-aa2e-ca02ef30cba0 00:06:37.117 [2024-12-06 23:40:48.484160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 01f84bc7-dfd6-4659-aa2e-ca02ef30cba0 is claimed 00:06:37.117 [2024-12-06 23:40:48.484274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 576538eb-63a9-4506-8755-6d40b855acf4 00:06:37.117 [2024-12-06 23:40:48.484307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 576538eb-63a9-4506-8755-6d40b855acf4 is claimed 00:06:37.117 [2024-12-06 23:40:48.484477] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 576538eb-63a9-4506-8755-6d40b855acf4 (2) smaller than existing raid bdev Raid (3) 00:06:37.117 [2024-12-06 23:40:48.484505] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 01f84bc7-dfd6-4659-aa2e-ca02ef30cba0: File exists 00:06:37.117 [2024-12-06 23:40:48.484552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:37.117 [2024-12-06 23:40:48.484565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:37.117 [2024-12-06 23:40:48.484838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:37.117 [2024-12-06 23:40:48.485001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:37.117 [2024-12-06 23:40:48.485012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:37.117 [2024-12-06 23:40:48.485174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.117 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:37.118 [2024-12-06 23:40:48.504338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60073 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60073 ']' 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60073 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60073 00:06:37.118 killing process with pid 60073 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60073' 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60073 00:06:37.118 [2024-12-06 23:40:48.590303] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:37.118 [2024-12-06 23:40:48.590364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.118 [2024-12-06 23:40:48.590404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.118 [2024-12-06 23:40:48.590413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:37.118 23:40:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60073 00:06:38.500 [2024-12-06 23:40:49.971899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.882 23:40:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:39.882 00:06:39.882 real 0m4.475s 00:06:39.882 user 0m4.664s 00:06:39.882 sys 0m0.566s 
00:06:39.882 ************************************ 00:06:39.882 END TEST raid0_resize_superblock_test 00:06:39.882 ************************************ 00:06:39.882 23:40:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.882 23:40:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.882 23:40:51 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:39.882 23:40:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.882 23:40:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.882 23:40:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.882 ************************************ 00:06:39.882 START TEST raid1_resize_superblock_test 00:06:39.882 ************************************ 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60166 00:06:39.882 Process raid pid: 60166 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60166' 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60166 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60166 ']' 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.882 23:40:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.882 [2024-12-06 23:40:51.231292] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:06:39.882 [2024-12-06 23:40:51.231407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.882 [2024-12-06 23:40:51.403377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.142 [2024-12-06 23:40:51.516562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.504 [2024-12-06 23:40:51.716432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.504 [2024-12-06 23:40:51.716466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.504 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.504 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:40.504 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:40.504 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.504 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:41.084 malloc0 00:06:41.084 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.084 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:41.084 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.084 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.084 [2024-12-06 23:40:52.591793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:41.084 [2024-12-06 23:40:52.591925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.084 [2024-12-06 23:40:52.591954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:41.084 [2024-12-06 23:40:52.591968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.084 [2024-12-06 23:40:52.594124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.084 [2024-12-06 23:40:52.594232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:41.084 pt0 00:06:41.084 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.084 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:41.084 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.084 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.344 c772644a-a249-4ac9-9b8f-c8e7306281fd 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.344 cbd5590b-4fb2-49ec-ac10-b3b967098956 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.344 8f4b50e0-ed84-448c-a083-e6dc991bb3f3 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.344 [2024-12-06 23:40:52.725269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cbd5590b-4fb2-49ec-ac10-b3b967098956 is claimed 00:06:41.344 [2024-12-06 23:40:52.725440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8f4b50e0-ed84-448c-a083-e6dc991bb3f3 is claimed 00:06:41.344 [2024-12-06 23:40:52.725655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:41.344 [2024-12-06 23:40:52.725735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:41.344 [2024-12-06 23:40:52.726032] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:41.344 [2024-12-06 23:40:52.726296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:41.344 [2024-12-06 23:40:52.726352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:41.344 [2024-12-06 23:40:52.726595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:41.344 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:41.345 23:40:52 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:41.345 [2024-12-06 23:40:52.837300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.345 [2024-12-06 23:40:52.885261] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:41.345 [2024-12-06 23:40:52.885348] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cbd5590b-4fb2-49ec-ac10-b3b967098956' was resized: old size 131072, new size 204800 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.345 [2024-12-06 23:40:52.897093] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:41.345 [2024-12-06 23:40:52.897169] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8f4b50e0-ed84-448c-a083-e6dc991bb3f3' was resized: old size 131072, new size 204800 00:06:41.345 [2024-12-06 23:40:52.897205] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:41.345 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.605 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 23:40:52 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 [2024-12-06 23:40:53.009048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.606 23:40:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 [2024-12-06 23:40:53.040785] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:06:41.606 [2024-12-06 23:40:53.040911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:41.606 [2024-12-06 23:40:53.040960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:41.606 [2024-12-06 23:40:53.041153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:41.606 [2024-12-06 23:40:53.041419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.606 [2024-12-06 23:40:53.041543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.606 [2024-12-06 23:40:53.041608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 [2024-12-06 23:40:53.052635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:41.606 [2024-12-06 23:40:53.052753] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.606 [2024-12-06 23:40:53.052794] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:41.606 [2024-12-06 23:40:53.052833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.606 [2024-12-06 23:40:53.055063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.606 [2024-12-06 23:40:53.055154] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:41.606 [2024-12-06 23:40:53.056923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cbd5590b-4fb2-49ec-ac10-b3b967098956 00:06:41.606 [2024-12-06 23:40:53.057071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cbd5590b-4fb2-49ec-ac10-b3b967098956 is claimed 00:06:41.606 [2024-12-06 23:40:53.057257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8f4b50e0-ed84-448c-a083-e6dc991bb3f3 00:06:41.606 [2024-12-06 23:40:53.057330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8f4b50e0-ed84-448c-a083-e6dc991bb3f3 is claimed 00:06:41.606 [2024-12-06 23:40:53.057556] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8f4b50e0-ed84-448c-a083-e6dc991bb3f3 (2) smaller than existing raid bdev Raid (3) 00:06:41.606 [2024-12-06 23:40:53.057637] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev cbd5590b-4fb2-49ec-ac10-b3b967098956: File exists 00:06:41.606 [2024-12-06 23:40:53.057745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:41.606 [2024-12-06 23:40:53.057784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:41.606 pt0 00:06:41.606 [2024-12-06 23:40:53.058058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:41.606 [2024-12-06 23:40:53.058289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:41.606 [2024-12-06 23:40:53.058340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:41.606 [2024-12-06 23:40:53.058536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 [2024-12-06 23:40:53.081229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60166 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60166 ']' 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60166 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.606 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60166 00:06:41.866 killing process with pid 60166 00:06:41.867 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.867 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.867 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60166' 00:06:41.867 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60166 00:06:41.867 [2024-12-06 23:40:53.168130] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:41.867 [2024-12-06 23:40:53.168226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.867 23:40:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60166 00:06:41.867 [2024-12-06 23:40:53.168288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.867 [2024-12-06 23:40:53.168298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:43.246 [2024-12-06 23:40:54.565095] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:44.184 23:40:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:44.184 00:06:44.184 real 0m4.511s 00:06:44.184 user 0m4.685s 00:06:44.184 sys 0m0.582s 00:06:44.184 23:40:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.184 ************************************ 00:06:44.184 END TEST raid1_resize_superblock_test 00:06:44.184 
************************************ 00:06:44.184 23:40:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.184 23:40:55 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:44.184 23:40:55 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:44.184 23:40:55 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:44.184 23:40:55 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:44.184 23:40:55 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:44.184 23:40:55 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:44.184 23:40:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.184 23:40:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.184 23:40:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:44.443 ************************************ 00:06:44.443 START TEST raid_function_test_raid0 00:06:44.443 ************************************ 00:06:44.443 23:40:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:44.443 23:40:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:44.443 23:40:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:44.443 23:40:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:44.443 23:40:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60269 00:06:44.443 23:40:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:44.444 23:40:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60269' 00:06:44.444 Process raid pid: 60269 00:06:44.444 23:40:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 
60269 00:06:44.444 23:40:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60269 ']' 00:06:44.444 23:40:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.444 23:40:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.444 23:40:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.444 23:40:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.444 23:40:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:44.444 [2024-12-06 23:40:55.833086] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:06:44.444 [2024-12-06 23:40:55.833283] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.703 [2024-12-06 23:40:56.004948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.703 [2024-12-06 23:40:56.121073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.962 [2024-12-06 23:40:56.330934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.962 [2024-12-06 23:40:56.331074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.223 Base_1 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.223 Base_2 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.223 [2024-12-06 23:40:56.747255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:45.223 [2024-12-06 23:40:56.749080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:45.223 [2024-12-06 23:40:56.749206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:45.223 [2024-12-06 23:40:56.749250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:45.223 [2024-12-06 23:40:56.749544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:45.223 [2024-12-06 23:40:56.749751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:06:45.223 [2024-12-06 23:40:56.749799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:45.223 [2024-12-06 23:40:56.749999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.223 23:40:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:45.483 23:40:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:45.483 [2024-12-06 23:40:56.990912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:45.483 /dev/nbd0 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.483 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.743 1+0 records in 00:06:45.743 1+0 records out 00:06:45.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745483 s, 5.5 MB/s 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.743 { 00:06:45.743 "nbd_device": "/dev/nbd0", 00:06:45.743 "bdev_name": "raid" 00:06:45.743 } 00:06:45.743 ]' 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.743 { 00:06:45.743 "nbd_device": "/dev/nbd0", 00:06:45.743 "bdev_name": "raid" 00:06:45.743 } 00:06:45.743 ]' 00:06:45.743 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 
00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 
00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:06:46.002 4096+0 records in
00:06:46.002 4096+0 records out
00:06:46.002 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0318376 s, 65.9 MB/s
00:06:46.002 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:06:46.261 4096+0 records in
00:06:46.261 4096+0 records out
00:06:46.261 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.225424 s, 9.3 MB/s
00:06:46.261 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:06:46.262 128+0 records in
00:06:46.262 128+0 records out
00:06:46.262 65536 bytes (66 kB, 64 KiB) copied, 0.00114659 s, 57.2 MB/s
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:06:46.262 2035+0 records in
00:06:46.262 2035+0 records out
00:06:46.262 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130771 s, 79.7 MB/s
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:06:46.262 456+0 records in
00:06:46.262 456+0 records out
00:06:46.262 233472 bytes (233 kB, 228 KiB) copied, 0.00379529 s, 61.5 MB/s
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:46.262 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:46.522 [2024-12-06 23:40:57.940251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:06:46.522 23:40:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60269
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60269 ']'
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60269
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60269
00:06:46.782 killing process with pid 60269
23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60269'
00:06:46.782 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60269
00:06:46.782 [2024-12-06 23:40:58.255414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:46.783 [2024-12-06 23:40:58.255514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:46.783 [2024-12-06 23:40:58.255565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:46.783 [2024-12-06 23:40:58.255579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:06:46.783 23:40:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60269
00:06:47.042 [2024-12-06 23:40:58.450820] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:48.015 23:40:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:06:48.015
00:06:48.015 real 0m3.797s
00:06:48.015 user 0m4.394s
00:06:48.015 sys 0m0.921s
00:06:48.015 ************************************
00:06:48.015 END TEST raid_function_test_raid0
00:06:48.015 ************************************
00:06:48.015 23:40:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:48.015 23:40:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:06:48.275 23:40:59 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:06:48.275 23:40:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:48.275 23:40:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:48.275 23:40:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:48.275 ************************************
00:06:48.275 START TEST raid_function_test_concat
00:06:48.275 ************************************
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:06:48.275 Process raid pid: 60398
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60398
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60398'
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60398
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60398 ']'
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
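The raid0 pass above follows a fixed pattern: fill a reference file with random data, copy it to the raid-backed NBD device, flush, byte-compare, then for three (offset, length) pairs zero the range in the reference file, blkdiscard the same range on the device, flush, and compare again. A rough standalone sketch of that verify loop (hypothetical: plain temp files stand in for /raidtest/raidrandtest and /dev/nbd0, and the discard is modeled as writing zeroes, which is the state the test expects to read back):

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the unmap/data-verify loop seen in bdev_raid.sh.
# A plain file ("disk") stands in for /dev/nbd0; blkdiscard is modeled by
# writing zeroes to the same range, so both sides must stay byte-identical.
set -euo pipefail

ref=$(mktemp)    # reference data file (the log's /raidtest/raidrandtest)
disk=$(mktemp)   # stand-in for the raid-backed block device

dd if=/dev/urandom of="$ref" bs=512 count=4096 status=none
dd if="$ref" of="$disk" bs=512 count=4096 conv=notrunc status=none
cmp -b -n 2097152 "$ref" "$disk"          # full compare after the copy

unmap_blk_offs=(0 1028 321)               # block offsets used by the test
unmap_blk_nums=(128 2035 456)             # block counts used by the test

for i in 0 1 2; do
  off=${unmap_blk_offs[$i]}
  num=${unmap_blk_nums[$i]}
  # zero the range in the reference file (dd conv=notrunc in the log) and
  # in the "device" (blkdiscard in the log), then re-verify everything
  dd if=/dev/zero of="$ref"  bs=512 seek="$off" count="$num" conv=notrunc status=none
  dd if=/dev/zero of="$disk" bs=512 seek="$off" count="$num" conv=notrunc status=none
  cmp -b -n 2097152 "$ref" "$disk"
done
echo "unmap verify ok"
rm -f "$ref" "$disk"
```

The three (offset, length) pairs match the log: 0/65536, 526336/1041920 and 164352/233472 bytes, i.e. the block offsets and counts above multiplied by the 512-byte block size.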
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:48.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:48.275 23:40:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:48.535 [2024-12-06 23:40:59.698001] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization...
00:06:48.535 [2024-12-06 23:40:59.698210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:48.535 [2024-12-06 23:40:59.873174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.535 [2024-12-06 23:40:59.983890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.794 [2024-12-06 23:41:00.180721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:48.794 [2024-12-06 23:41:00.180756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:49.053 Base_1
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:49.053 Base_2
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:49.053 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:49.312 [2024-12-06 23:41:00.612688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:06:49.312 [2024-12-06 23:41:00.614465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:06:49.312 [2024-12-06 23:41:00.614527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:49.312 [2024-12-06 23:41:00.614538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:06:49.312 [2024-12-06 23:41:00.614813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:49.312 [2024-12-06 23:41:00.614979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:49.312 [2024-12-06 23:41:00.614990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:06:49.312 [2024-12-06 23:41:00.615136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:49.312 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:49.312 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:06:49.312 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:06:49.312 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:49.312 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:06:49.313 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:06:49.313 [2024-12-06 23:41:00.848326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:06:49.313 /dev/nbd0
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:49.579 1+0 records in
00:06:49.579 1+0 records out
00:06:49.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555969 s, 7.4 MB/s
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096
00:06:49.579 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:49.580 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:49.580 23:41:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0
00:06:49.580 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:49.580 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:06:49.580 23:41:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:06:49.580 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:06:49.580 23:41:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:49.853 {
00:06:49.853 "nbd_device": "/dev/nbd0",
00:06:49.853 "bdev_name": "raid"
00:06:49.853 }
00:06:49.853 ]'
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:49.853 {
00:06:49.853 "nbd_device": "/dev/nbd0",
00:06:49.853 "bdev_name": "raid"
00:06:49.853 }
00:06:49.853 ]'
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:06:49.853 4096+0 records in
00:06:49.853 4096+0 records out
00:06:49.853 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0298502 s, 70.3 MB/s
00:06:49.853 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:06:50.140 4096+0 records in
00:06:50.140 4096+0 records out
00:06:50.140 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.190827 s, 11.0 MB/s
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:06:50.140 128+0 records in
00:06:50.140 128+0 records out
00:06:50.140 65536 bytes (66 kB, 64 KiB) copied, 0.00124344 s, 52.7 MB/s
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:06:50.140 2035+0 records in
00:06:50.140 2035+0 records out
00:06:50.140 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0134062 s, 77.7 MB/s
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:06:50.140 456+0 records in
00:06:50.140 456+0 records out
00:06:50.140 233472 bytes (233 kB, 228 KiB) copied, 0.00386268 s, 60.4 MB/s
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:50.140 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:50.400 [2024-12-06 23:41:01.775816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:06:50.400 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:06:50.661 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:50.661 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:50.661 23:41:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60398
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60398 ']'
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60398
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60398
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60398'
killing process with pid 60398
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60398
00:06:50.661 [2024-12-06 23:41:02.082607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:50.661 [2024-12-06 23:41:02.082739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:50.661 [2024-12-06 23:41:02.082795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:50.661 [2024-12-06 23:41:02.082809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:06:50.661 23:41:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60398
00:06:50.921 [2024-12-06 23:41:02.281388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:51.861 ************************************
00:06:51.861 23:41:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:06:51.861
00:06:51.861 real 0m3.759s
00:06:51.861 user 0m4.378s
00:06:51.861 sys 0m0.915s
00:06:51.861 23:41:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.861 23:41:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:51.861 END TEST raid_function_test_concat
************************************
00:06:52.122 23:41:03 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:06:52.122 23:41:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:52.122 23:41:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.122 23:41:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:52.122 ************************************
00:06:52.122 START TEST raid0_resize_test
00:06:52.122 ************************************
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60519
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60519'
Process raid pid: 60519
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60519
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60519 ']'
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:52.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:52.122 23:41:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.122 [2024-12-06 23:41:03.530284] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization...
00:06:52.122 [2024-12-06 23:41:03.530419] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.383 [2024-12-06 23:41:03.687751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.383 [2024-12-06 23:41:03.802328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.643 [2024-12-06 23:41:04.004695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.643 [2024-12-06 23:41:04.004728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.902 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.902 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:52.902 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:52.902 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.903 Base_1 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.903 Base_2 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.903 [2024-12-06 23:41:04.363799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:52.903 [2024-12-06 23:41:04.365562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:52.903 [2024-12-06 23:41:04.365708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:52.903 [2024-12-06 23:41:04.365729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:52.903 [2024-12-06 23:41:04.366017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:52.903 [2024-12-06 23:41:04.366132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:52.903 [2024-12-06 23:41:04.366140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:52.903 [2024-12-06 23:41:04.366274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.903 [2024-12-06 23:41:04.371754] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.903 [2024-12-06 23:41:04.371831] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:52.903 true 
00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.903 [2024-12-06 23:41:04.383886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.903 [2024-12-06 23:41:04.431614] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.903 [2024-12-06 23:41:04.431689] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:52.903 [2024-12-06 23:41:04.431763] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:52.903 true 
00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.903 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.903 [2024-12-06 23:41:04.447749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60519 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60519 ']' 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60519 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60519 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.162 23:41:04 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60519' 00:06:53.162 killing process with pid 60519 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60519 00:06:53.162 [2024-12-06 23:41:04.532676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.162 [2024-12-06 23:41:04.532798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.162 [2024-12-06 23:41:04.532870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.162 23:41:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60519 00:06:53.162 [2024-12-06 23:41:04.532914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:53.162 [2024-12-06 23:41:04.549924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.099 23:41:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:54.099 00:06:54.099 real 0m2.180s 00:06:54.099 user 0m2.311s 00:06:54.099 sys 0m0.332s 00:06:54.099 23:41:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.099 23:41:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.099 ************************************ 00:06:54.099 END TEST raid0_resize_test 00:06:54.099 ************************************ 00:06:54.358 23:41:05 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:54.358 23:41:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.358 23:41:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.358 23:41:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.358 
************************************ 00:06:54.358 START TEST raid1_resize_test 00:06:54.358 ************************************ 00:06:54.358 23:41:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:54.358 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:54.358 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:54.358 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:54.358 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:54.358 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:54.358 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:54.358 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:54.359 Process raid pid: 60582 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60582 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60582' 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60582 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60582 ']' 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.359 23:41:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.359 [2024-12-06 23:41:05.780445] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:06:54.359 [2024-12-06 23:41:05.780666] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.617 [2024-12-06 23:41:05.958391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.617 [2024-12-06 23:41:06.069226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.877 [2024-12-06 23:41:06.262442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.877 [2024-12-06 23:41:06.262561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.137 Base_1 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:55.137 23:41:06 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.137 Base_2 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.137 [2024-12-06 23:41:06.624265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:55.137 [2024-12-06 23:41:06.626217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:55.137 [2024-12-06 23:41:06.626331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.137 [2024-12-06 23:41:06.626370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:55.137 [2024-12-06 23:41:06.626625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:55.137 [2024-12-06 23:41:06.626818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.137 [2024-12-06 23:41:06.626859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:55.137 [2024-12-06 23:41:06.627031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:55.137 23:41:06 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.137 [2024-12-06 23:41:06.632230] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.137 [2024-12-06 23:41:06.632297] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:55.137 true 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.137 [2024-12-06 23:41:06.644366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:55.137 [2024-12-06 23:41:06.688125] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.137 [2024-12-06 23:41:06.688188] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:55.137 [2024-12-06 23:41:06.688219] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:55.137 true 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.137 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.397 [2024-12-06 23:41:06.704250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60582 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60582 ']' 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60582 00:06:55.397 
23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60582 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.397 killing process with pid 60582 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60582' 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60582 00:06:55.397 [2024-12-06 23:41:06.785906] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.397 [2024-12-06 23:41:06.785988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.397 23:41:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60582 00:06:55.397 [2024-12-06 23:41:06.786486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.397 [2024-12-06 23:41:06.786524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:55.397 [2024-12-06 23:41:06.803495] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.336 ************************************ 00:06:56.336 END TEST raid1_resize_test 00:06:56.336 ************************************ 00:06:56.336 23:41:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:56.336 00:06:56.336 real 0m2.186s 00:06:56.336 user 0m2.319s 00:06:56.336 sys 0m0.333s 00:06:56.336 23:41:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.336 23:41:07 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.596 23:41:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:56.596 23:41:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:56.596 23:41:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:56.596 23:41:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:56.596 23:41:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.596 23:41:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.596 ************************************ 00:06:56.596 START TEST raid_state_function_test 00:06:56.596 ************************************ 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:56.596 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:56.597 23:41:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:56.597 Process raid pid: 60639 00:06:56.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60639 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60639' 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60639 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60639 ']' 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.597 23:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.597 [2024-12-06 23:41:08.041430] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:06:56.597 [2024-12-06 23:41:08.041549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.871 [2024-12-06 23:41:08.201011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.871 [2024-12-06 23:41:08.308950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.131 [2024-12-06 23:41:08.499899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.131 [2024-12-06 23:41:08.499942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.390 [2024-12-06 23:41:08.873888] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:57.390 [2024-12-06 23:41:08.874000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:57.390 [2024-12-06 23:41:08.874032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.390 [2024-12-06 23:41:08.874057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.390 23:41:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.390 "name": "Existed_Raid", 00:06:57.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.390 "strip_size_kb": 64, 00:06:57.390 "state": "configuring", 00:06:57.390 
"raid_level": "raid0", 00:06:57.390 "superblock": false, 00:06:57.390 "num_base_bdevs": 2, 00:06:57.390 "num_base_bdevs_discovered": 0, 00:06:57.390 "num_base_bdevs_operational": 2, 00:06:57.390 "base_bdevs_list": [ 00:06:57.390 { 00:06:57.390 "name": "BaseBdev1", 00:06:57.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.390 "is_configured": false, 00:06:57.390 "data_offset": 0, 00:06:57.390 "data_size": 0 00:06:57.390 }, 00:06:57.390 { 00:06:57.390 "name": "BaseBdev2", 00:06:57.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.390 "is_configured": false, 00:06:57.390 "data_offset": 0, 00:06:57.390 "data_size": 0 00:06:57.390 } 00:06:57.390 ] 00:06:57.390 }' 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.390 23:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.960 [2024-12-06 23:41:09.317038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:57.960 [2024-12-06 23:41:09.317112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:57.960 [2024-12-06 23:41:09.329020] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:57.960 [2024-12-06 23:41:09.329108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:57.960 [2024-12-06 23:41:09.329135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.960 [2024-12-06 23:41:09.329159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.960 [2024-12-06 23:41:09.374748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.960 BaseBdev1 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.960 [ 00:06:57.960 { 00:06:57.960 "name": "BaseBdev1", 00:06:57.960 "aliases": [ 00:06:57.960 "e2676f62-8ce8-4088-bac8-bcd6c93b1da0" 00:06:57.960 ], 00:06:57.960 "product_name": "Malloc disk", 00:06:57.960 "block_size": 512, 00:06:57.960 "num_blocks": 65536, 00:06:57.960 "uuid": "e2676f62-8ce8-4088-bac8-bcd6c93b1da0", 00:06:57.960 "assigned_rate_limits": { 00:06:57.960 "rw_ios_per_sec": 0, 00:06:57.960 "rw_mbytes_per_sec": 0, 00:06:57.960 "r_mbytes_per_sec": 0, 00:06:57.960 "w_mbytes_per_sec": 0 00:06:57.960 }, 00:06:57.960 "claimed": true, 00:06:57.960 "claim_type": "exclusive_write", 00:06:57.960 "zoned": false, 00:06:57.960 "supported_io_types": { 00:06:57.960 "read": true, 00:06:57.960 "write": true, 00:06:57.960 "unmap": true, 00:06:57.960 "flush": true, 00:06:57.960 "reset": true, 00:06:57.960 "nvme_admin": false, 00:06:57.960 "nvme_io": false, 00:06:57.960 "nvme_io_md": false, 00:06:57.960 "write_zeroes": true, 00:06:57.960 "zcopy": true, 00:06:57.960 "get_zone_info": false, 00:06:57.960 "zone_management": false, 00:06:57.960 "zone_append": false, 00:06:57.960 "compare": false, 00:06:57.960 "compare_and_write": false, 00:06:57.960 "abort": true, 00:06:57.960 "seek_hole": false, 00:06:57.960 "seek_data": false, 00:06:57.960 "copy": true, 00:06:57.960 "nvme_iov_md": 
false 00:06:57.960 }, 00:06:57.960 "memory_domains": [ 00:06:57.960 { 00:06:57.960 "dma_device_id": "system", 00:06:57.960 "dma_device_type": 1 00:06:57.960 }, 00:06:57.960 { 00:06:57.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.960 "dma_device_type": 2 00:06:57.960 } 00:06:57.960 ], 00:06:57.960 "driver_specific": {} 00:06:57.960 } 00:06:57.960 ] 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.960 
23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.960 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.960 "name": "Existed_Raid", 00:06:57.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.960 "strip_size_kb": 64, 00:06:57.960 "state": "configuring", 00:06:57.960 "raid_level": "raid0", 00:06:57.960 "superblock": false, 00:06:57.960 "num_base_bdevs": 2, 00:06:57.961 "num_base_bdevs_discovered": 1, 00:06:57.961 "num_base_bdevs_operational": 2, 00:06:57.961 "base_bdevs_list": [ 00:06:57.961 { 00:06:57.961 "name": "BaseBdev1", 00:06:57.961 "uuid": "e2676f62-8ce8-4088-bac8-bcd6c93b1da0", 00:06:57.961 "is_configured": true, 00:06:57.961 "data_offset": 0, 00:06:57.961 "data_size": 65536 00:06:57.961 }, 00:06:57.961 { 00:06:57.961 "name": "BaseBdev2", 00:06:57.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.961 "is_configured": false, 00:06:57.961 "data_offset": 0, 00:06:57.961 "data_size": 0 00:06:57.961 } 00:06:57.961 ] 00:06:57.961 }' 00:06:57.961 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.961 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.531 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:58.531 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.531 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.531 [2024-12-06 23:41:09.897906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:58.531 [2024-12-06 23:41:09.898016] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:58.531 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.531 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:58.531 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.532 [2024-12-06 23:41:09.909902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:58.532 [2024-12-06 23:41:09.911810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.532 [2024-12-06 23:41:09.911857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.532 "name": "Existed_Raid", 00:06:58.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.532 "strip_size_kb": 64, 00:06:58.532 "state": "configuring", 00:06:58.532 "raid_level": "raid0", 00:06:58.532 "superblock": false, 00:06:58.532 "num_base_bdevs": 2, 00:06:58.532 "num_base_bdevs_discovered": 1, 00:06:58.532 "num_base_bdevs_operational": 2, 00:06:58.532 "base_bdevs_list": [ 00:06:58.532 { 00:06:58.532 "name": "BaseBdev1", 00:06:58.532 "uuid": "e2676f62-8ce8-4088-bac8-bcd6c93b1da0", 00:06:58.532 "is_configured": true, 00:06:58.532 "data_offset": 0, 00:06:58.532 "data_size": 65536 00:06:58.532 }, 00:06:58.532 { 00:06:58.532 "name": "BaseBdev2", 00:06:58.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.532 "is_configured": false, 00:06:58.532 "data_offset": 0, 00:06:58.532 "data_size": 0 00:06:58.532 } 00:06:58.532 
] 00:06:58.532 }' 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.532 23:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.792 [2024-12-06 23:41:10.315458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:58.792 [2024-12-06 23:41:10.315579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:58.792 [2024-12-06 23:41:10.315606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:58.792 [2024-12-06 23:41:10.315935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:58.792 [2024-12-06 23:41:10.316155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:58.792 [2024-12-06 23:41:10.316202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:58.792 [2024-12-06 23:41:10.316496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.792 BaseBdev2 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:58.792 23:41:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.792 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.792 [ 00:06:58.792 { 00:06:58.792 "name": "BaseBdev2", 00:06:58.792 "aliases": [ 00:06:58.792 "18eafa8f-8e15-4b55-93ad-a70a7b4ea989" 00:06:58.792 ], 00:06:58.792 "product_name": "Malloc disk", 00:06:58.792 "block_size": 512, 00:06:58.792 "num_blocks": 65536, 00:06:58.792 "uuid": "18eafa8f-8e15-4b55-93ad-a70a7b4ea989", 00:06:58.792 "assigned_rate_limits": { 00:06:58.792 "rw_ios_per_sec": 0, 00:06:58.792 "rw_mbytes_per_sec": 0, 00:06:58.792 "r_mbytes_per_sec": 0, 00:06:58.792 "w_mbytes_per_sec": 0 00:06:58.792 }, 00:06:58.792 "claimed": true, 00:06:58.792 "claim_type": "exclusive_write", 00:06:58.792 "zoned": false, 00:06:58.792 "supported_io_types": { 00:06:58.792 "read": true, 00:06:58.792 "write": true, 00:06:58.792 "unmap": true, 00:06:58.792 "flush": true, 00:06:58.792 "reset": true, 00:06:58.792 "nvme_admin": false, 00:06:58.792 "nvme_io": false, 00:06:58.792 "nvme_io_md": 
false, 00:06:58.792 "write_zeroes": true, 00:06:58.792 "zcopy": true, 00:06:58.792 "get_zone_info": false, 00:06:58.792 "zone_management": false, 00:06:58.792 "zone_append": false, 00:06:59.052 "compare": false, 00:06:59.052 "compare_and_write": false, 00:06:59.052 "abort": true, 00:06:59.052 "seek_hole": false, 00:06:59.052 "seek_data": false, 00:06:59.052 "copy": true, 00:06:59.052 "nvme_iov_md": false 00:06:59.052 }, 00:06:59.052 "memory_domains": [ 00:06:59.052 { 00:06:59.052 "dma_device_id": "system", 00:06:59.052 "dma_device_type": 1 00:06:59.052 }, 00:06:59.052 { 00:06:59.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.052 "dma_device_type": 2 00:06:59.052 } 00:06:59.052 ], 00:06:59.052 "driver_specific": {} 00:06:59.052 } 00:06:59.052 ] 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:59.052 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.053 "name": "Existed_Raid", 00:06:59.053 "uuid": "e2bc8133-24ae-4443-ba5c-131f39adfaa4", 00:06:59.053 "strip_size_kb": 64, 00:06:59.053 "state": "online", 00:06:59.053 "raid_level": "raid0", 00:06:59.053 "superblock": false, 00:06:59.053 "num_base_bdevs": 2, 00:06:59.053 "num_base_bdevs_discovered": 2, 00:06:59.053 "num_base_bdevs_operational": 2, 00:06:59.053 "base_bdevs_list": [ 00:06:59.053 { 00:06:59.053 "name": "BaseBdev1", 00:06:59.053 "uuid": "e2676f62-8ce8-4088-bac8-bcd6c93b1da0", 00:06:59.053 "is_configured": true, 00:06:59.053 "data_offset": 0, 00:06:59.053 "data_size": 65536 00:06:59.053 }, 00:06:59.053 { 00:06:59.053 "name": "BaseBdev2", 00:06:59.053 "uuid": "18eafa8f-8e15-4b55-93ad-a70a7b4ea989", 00:06:59.053 "is_configured": true, 00:06:59.053 "data_offset": 0, 00:06:59.053 "data_size": 65536 00:06:59.053 } 00:06:59.053 ] 00:06:59.053 }' 00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:59.053 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.312 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.312 [2024-12-06 23:41:10.871086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:59.572 "name": "Existed_Raid", 00:06:59.572 "aliases": [ 00:06:59.572 "e2bc8133-24ae-4443-ba5c-131f39adfaa4" 00:06:59.572 ], 00:06:59.572 "product_name": "Raid Volume", 00:06:59.572 "block_size": 512, 00:06:59.572 "num_blocks": 131072, 00:06:59.572 "uuid": "e2bc8133-24ae-4443-ba5c-131f39adfaa4", 00:06:59.572 "assigned_rate_limits": { 00:06:59.572 "rw_ios_per_sec": 0, 00:06:59.572 "rw_mbytes_per_sec": 0, 00:06:59.572 "r_mbytes_per_sec": 
0, 00:06:59.572 "w_mbytes_per_sec": 0 00:06:59.572 }, 00:06:59.572 "claimed": false, 00:06:59.572 "zoned": false, 00:06:59.572 "supported_io_types": { 00:06:59.572 "read": true, 00:06:59.572 "write": true, 00:06:59.572 "unmap": true, 00:06:59.572 "flush": true, 00:06:59.572 "reset": true, 00:06:59.572 "nvme_admin": false, 00:06:59.572 "nvme_io": false, 00:06:59.572 "nvme_io_md": false, 00:06:59.572 "write_zeroes": true, 00:06:59.572 "zcopy": false, 00:06:59.572 "get_zone_info": false, 00:06:59.572 "zone_management": false, 00:06:59.572 "zone_append": false, 00:06:59.572 "compare": false, 00:06:59.572 "compare_and_write": false, 00:06:59.572 "abort": false, 00:06:59.572 "seek_hole": false, 00:06:59.572 "seek_data": false, 00:06:59.572 "copy": false, 00:06:59.572 "nvme_iov_md": false 00:06:59.572 }, 00:06:59.572 "memory_domains": [ 00:06:59.572 { 00:06:59.572 "dma_device_id": "system", 00:06:59.572 "dma_device_type": 1 00:06:59.572 }, 00:06:59.572 { 00:06:59.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.572 "dma_device_type": 2 00:06:59.572 }, 00:06:59.572 { 00:06:59.572 "dma_device_id": "system", 00:06:59.572 "dma_device_type": 1 00:06:59.572 }, 00:06:59.572 { 00:06:59.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.572 "dma_device_type": 2 00:06:59.572 } 00:06:59.572 ], 00:06:59.572 "driver_specific": { 00:06:59.572 "raid": { 00:06:59.572 "uuid": "e2bc8133-24ae-4443-ba5c-131f39adfaa4", 00:06:59.572 "strip_size_kb": 64, 00:06:59.572 "state": "online", 00:06:59.572 "raid_level": "raid0", 00:06:59.572 "superblock": false, 00:06:59.572 "num_base_bdevs": 2, 00:06:59.572 "num_base_bdevs_discovered": 2, 00:06:59.572 "num_base_bdevs_operational": 2, 00:06:59.572 "base_bdevs_list": [ 00:06:59.572 { 00:06:59.572 "name": "BaseBdev1", 00:06:59.572 "uuid": "e2676f62-8ce8-4088-bac8-bcd6c93b1da0", 00:06:59.572 "is_configured": true, 00:06:59.572 "data_offset": 0, 00:06:59.572 "data_size": 65536 00:06:59.572 }, 00:06:59.572 { 00:06:59.572 "name": "BaseBdev2", 
00:06:59.572 "uuid": "18eafa8f-8e15-4b55-93ad-a70a7b4ea989", 00:06:59.572 "is_configured": true, 00:06:59.572 "data_offset": 0, 00:06:59.572 "data_size": 65536 00:06:59.572 } 00:06:59.572 ] 00:06:59.572 } 00:06:59.572 } 00:06:59.572 }' 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:59.572 BaseBdev2' 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.572 23:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.572 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.572 [2024-12-06 23:41:11.098806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:59.572 [2024-12-06 23:41:11.098838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.572 [2024-12-06 23:41:11.098886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.832 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.833 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.833 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.833 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.833 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.833 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.833 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.833 "name": "Existed_Raid", 00:06:59.833 "uuid": "e2bc8133-24ae-4443-ba5c-131f39adfaa4", 00:06:59.833 "strip_size_kb": 64, 00:06:59.833 
"state": "offline", 00:06:59.833 "raid_level": "raid0", 00:06:59.833 "superblock": false, 00:06:59.833 "num_base_bdevs": 2, 00:06:59.833 "num_base_bdevs_discovered": 1, 00:06:59.833 "num_base_bdevs_operational": 1, 00:06:59.833 "base_bdevs_list": [ 00:06:59.833 { 00:06:59.833 "name": null, 00:06:59.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.833 "is_configured": false, 00:06:59.833 "data_offset": 0, 00:06:59.833 "data_size": 65536 00:06:59.833 }, 00:06:59.833 { 00:06:59.833 "name": "BaseBdev2", 00:06:59.833 "uuid": "18eafa8f-8e15-4b55-93ad-a70a7b4ea989", 00:06:59.833 "is_configured": true, 00:06:59.833 "data_offset": 0, 00:06:59.833 "data_size": 65536 00:06:59.833 } 00:06:59.833 ] 00:06:59.833 }' 00:06:59.833 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.833 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.092 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:00.092 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.353 [2024-12-06 23:41:11.705892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:00.353 [2024-12-06 23:41:11.706006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60639 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60639 ']' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60639 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60639 00:07:00.353 killing process with pid 60639 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60639' 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60639 00:07:00.353 [2024-12-06 23:41:11.888543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:00.353 23:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60639 00:07:00.353 [2024-12-06 23:41:11.905159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:01.730 00:07:01.730 real 0m5.062s 00:07:01.730 user 0m7.311s 00:07:01.730 sys 0m0.826s 00:07:01.730 ************************************ 00:07:01.730 END TEST raid_state_function_test 00:07:01.730 ************************************ 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 23:41:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:01.730 23:41:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:01.730 23:41:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.730 23:41:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 ************************************ 00:07:01.730 START TEST raid_state_function_test_sb 00:07:01.730 ************************************ 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60891 00:07:01.730 Process raid pid: 60891 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60891' 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60891 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60891 ']' 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.730 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 [2024-12-06 23:41:13.173008] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:07:01.730 [2024-12-06 23:41:13.173220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.989 [2024-12-06 23:41:13.346431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.989 [2024-12-06 23:41:13.456146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.248 [2024-12-06 23:41:13.653756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.248 [2024-12-06 23:41:13.653871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.507 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.507 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:02.507 23:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.507 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.507 23:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.507 [2024-12-06 23:41:14.006968] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:02.507 [2024-12-06 23:41:14.007102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.507 [2024-12-06 23:41:14.007118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.507 [2024-12-06 23:41:14.007130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.507 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.508 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.508 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:02.508 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.508 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.508 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.508 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.508 "name": "Existed_Raid", 00:07:02.508 "uuid": "91e8067c-f13e-490c-935f-d3ac6c8743c7", 00:07:02.508 "strip_size_kb": 64, 00:07:02.508 "state": "configuring", 00:07:02.508 "raid_level": "raid0", 00:07:02.508 "superblock": true, 00:07:02.508 "num_base_bdevs": 2, 00:07:02.508 "num_base_bdevs_discovered": 0, 00:07:02.508 "num_base_bdevs_operational": 2, 00:07:02.508 "base_bdevs_list": [ 00:07:02.508 { 00:07:02.508 "name": "BaseBdev1", 00:07:02.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.508 "is_configured": false, 00:07:02.508 "data_offset": 0, 00:07:02.508 "data_size": 0 00:07:02.508 }, 00:07:02.508 { 00:07:02.508 "name": "BaseBdev2", 00:07:02.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.508 "is_configured": false, 00:07:02.508 "data_offset": 0, 00:07:02.508 "data_size": 0 00:07:02.508 } 00:07:02.508 ] 00:07:02.508 }' 00:07:02.508 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.508 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.075 [2024-12-06 23:41:14.458474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:03.075 [2024-12-06 23:41:14.458573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.075 [2024-12-06 23:41:14.466446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.075 [2024-12-06 23:41:14.466524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.075 [2024-12-06 23:41:14.466550] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.075 [2024-12-06 23:41:14.466575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.075 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.076 [2024-12-06 23:41:14.507828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.076 BaseBdev1 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.076 [ 00:07:03.076 { 00:07:03.076 "name": "BaseBdev1", 00:07:03.076 "aliases": [ 00:07:03.076 "74661eb4-d4ec-497d-a515-d91345643719" 00:07:03.076 ], 00:07:03.076 "product_name": "Malloc disk", 00:07:03.076 "block_size": 512, 00:07:03.076 "num_blocks": 65536, 00:07:03.076 "uuid": "74661eb4-d4ec-497d-a515-d91345643719", 00:07:03.076 "assigned_rate_limits": { 00:07:03.076 "rw_ios_per_sec": 0, 00:07:03.076 "rw_mbytes_per_sec": 0, 00:07:03.076 "r_mbytes_per_sec": 0, 00:07:03.076 "w_mbytes_per_sec": 0 00:07:03.076 }, 00:07:03.076 "claimed": true, 
00:07:03.076 "claim_type": "exclusive_write", 00:07:03.076 "zoned": false, 00:07:03.076 "supported_io_types": { 00:07:03.076 "read": true, 00:07:03.076 "write": true, 00:07:03.076 "unmap": true, 00:07:03.076 "flush": true, 00:07:03.076 "reset": true, 00:07:03.076 "nvme_admin": false, 00:07:03.076 "nvme_io": false, 00:07:03.076 "nvme_io_md": false, 00:07:03.076 "write_zeroes": true, 00:07:03.076 "zcopy": true, 00:07:03.076 "get_zone_info": false, 00:07:03.076 "zone_management": false, 00:07:03.076 "zone_append": false, 00:07:03.076 "compare": false, 00:07:03.076 "compare_and_write": false, 00:07:03.076 "abort": true, 00:07:03.076 "seek_hole": false, 00:07:03.076 "seek_data": false, 00:07:03.076 "copy": true, 00:07:03.076 "nvme_iov_md": false 00:07:03.076 }, 00:07:03.076 "memory_domains": [ 00:07:03.076 { 00:07:03.076 "dma_device_id": "system", 00:07:03.076 "dma_device_type": 1 00:07:03.076 }, 00:07:03.076 { 00:07:03.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.076 "dma_device_type": 2 00:07:03.076 } 00:07:03.076 ], 00:07:03.076 "driver_specific": {} 00:07:03.076 } 00:07:03.076 ] 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.076 23:41:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.076 "name": "Existed_Raid", 00:07:03.076 "uuid": "05068985-3b13-4c21-a3f7-2345dbde22a7", 00:07:03.076 "strip_size_kb": 64, 00:07:03.076 "state": "configuring", 00:07:03.076 "raid_level": "raid0", 00:07:03.076 "superblock": true, 00:07:03.076 "num_base_bdevs": 2, 00:07:03.076 "num_base_bdevs_discovered": 1, 00:07:03.076 "num_base_bdevs_operational": 2, 00:07:03.076 "base_bdevs_list": [ 00:07:03.076 { 00:07:03.076 "name": "BaseBdev1", 00:07:03.076 "uuid": "74661eb4-d4ec-497d-a515-d91345643719", 00:07:03.076 "is_configured": true, 00:07:03.076 "data_offset": 2048, 00:07:03.076 "data_size": 63488 00:07:03.076 }, 00:07:03.076 { 00:07:03.076 "name": "BaseBdev2", 00:07:03.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.076 
"is_configured": false, 00:07:03.076 "data_offset": 0, 00:07:03.076 "data_size": 0 00:07:03.076 } 00:07:03.076 ] 00:07:03.076 }' 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.076 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.643 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.643 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.643 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.643 [2024-12-06 23:41:14.995067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:03.643 [2024-12-06 23:41:14.995190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:03.644 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.644 23:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.644 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.644 23:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.644 [2024-12-06 23:41:15.007069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.644 [2024-12-06 23:41:15.008874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.644 [2024-12-06 23:41:15.008917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.644 23:41:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.644 23:41:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.644 "name": "Existed_Raid", 00:07:03.644 "uuid": "fba80845-cecf-4266-9393-ceabe0dbaec4", 00:07:03.644 "strip_size_kb": 64, 00:07:03.644 "state": "configuring", 00:07:03.644 "raid_level": "raid0", 00:07:03.644 "superblock": true, 00:07:03.644 "num_base_bdevs": 2, 00:07:03.644 "num_base_bdevs_discovered": 1, 00:07:03.644 "num_base_bdevs_operational": 2, 00:07:03.644 "base_bdevs_list": [ 00:07:03.644 { 00:07:03.644 "name": "BaseBdev1", 00:07:03.644 "uuid": "74661eb4-d4ec-497d-a515-d91345643719", 00:07:03.644 "is_configured": true, 00:07:03.644 "data_offset": 2048, 00:07:03.644 "data_size": 63488 00:07:03.644 }, 00:07:03.644 { 00:07:03.644 "name": "BaseBdev2", 00:07:03.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.644 "is_configured": false, 00:07:03.644 "data_offset": 0, 00:07:03.644 "data_size": 0 00:07:03.644 } 00:07:03.644 ] 00:07:03.644 }' 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.644 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.901 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:03.901 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.901 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.159 [2024-12-06 23:41:15.495215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:04.159 [2024-12-06 23:41:15.495587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:04.159 [2024-12-06 23:41:15.495639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.159 [2024-12-06 23:41:15.495933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:04.159 [2024-12-06 23:41:15.496141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:04.159 [2024-12-06 23:41:15.496189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:04.159 BaseBdev2 00:07:04.159 [2024-12-06 23:41:15.496376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.159 23:41:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.159 [ 00:07:04.159 { 00:07:04.159 "name": "BaseBdev2", 00:07:04.159 "aliases": [ 00:07:04.159 "9b7a5659-491d-4db2-b0ff-a387a06c96a9" 00:07:04.159 ], 00:07:04.159 "product_name": "Malloc disk", 00:07:04.159 "block_size": 512, 00:07:04.159 "num_blocks": 65536, 00:07:04.159 "uuid": "9b7a5659-491d-4db2-b0ff-a387a06c96a9", 00:07:04.159 "assigned_rate_limits": { 00:07:04.159 "rw_ios_per_sec": 0, 00:07:04.159 "rw_mbytes_per_sec": 0, 00:07:04.159 "r_mbytes_per_sec": 0, 00:07:04.159 "w_mbytes_per_sec": 0 00:07:04.159 }, 00:07:04.159 "claimed": true, 00:07:04.159 "claim_type": "exclusive_write", 00:07:04.159 "zoned": false, 00:07:04.159 "supported_io_types": { 00:07:04.159 "read": true, 00:07:04.159 "write": true, 00:07:04.159 "unmap": true, 00:07:04.159 "flush": true, 00:07:04.159 "reset": true, 00:07:04.159 "nvme_admin": false, 00:07:04.159 "nvme_io": false, 00:07:04.159 "nvme_io_md": false, 00:07:04.159 "write_zeroes": true, 00:07:04.159 "zcopy": true, 00:07:04.159 "get_zone_info": false, 00:07:04.159 "zone_management": false, 00:07:04.159 "zone_append": false, 00:07:04.159 "compare": false, 00:07:04.159 "compare_and_write": false, 00:07:04.159 "abort": true, 00:07:04.159 "seek_hole": false, 00:07:04.159 "seek_data": false, 00:07:04.159 "copy": true, 00:07:04.159 "nvme_iov_md": false 00:07:04.159 }, 00:07:04.159 "memory_domains": [ 00:07:04.159 { 00:07:04.159 "dma_device_id": "system", 00:07:04.159 "dma_device_type": 1 00:07:04.159 }, 00:07:04.159 { 00:07:04.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.159 "dma_device_type": 2 00:07:04.159 } 00:07:04.159 ], 00:07:04.159 "driver_specific": {} 00:07:04.159 } 00:07:04.159 ] 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.159 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:04.159 23:41:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.160 23:41:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.160 "name": "Existed_Raid", 00:07:04.160 "uuid": "fba80845-cecf-4266-9393-ceabe0dbaec4", 00:07:04.160 "strip_size_kb": 64, 00:07:04.160 "state": "online", 00:07:04.160 "raid_level": "raid0", 00:07:04.160 "superblock": true, 00:07:04.160 "num_base_bdevs": 2, 00:07:04.160 "num_base_bdevs_discovered": 2, 00:07:04.160 "num_base_bdevs_operational": 2, 00:07:04.160 "base_bdevs_list": [ 00:07:04.160 { 00:07:04.160 "name": "BaseBdev1", 00:07:04.160 "uuid": "74661eb4-d4ec-497d-a515-d91345643719", 00:07:04.160 "is_configured": true, 00:07:04.160 "data_offset": 2048, 00:07:04.160 "data_size": 63488 00:07:04.160 }, 00:07:04.160 { 00:07:04.160 "name": "BaseBdev2", 00:07:04.160 "uuid": "9b7a5659-491d-4db2-b0ff-a387a06c96a9", 00:07:04.160 "is_configured": true, 00:07:04.160 "data_offset": 2048, 00:07:04.160 "data_size": 63488 00:07:04.160 } 00:07:04.160 ] 00:07:04.160 }' 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.160 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.728 23:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:04.728 [2024-12-06 23:41:15.987094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:04.728 "name": "Existed_Raid", 00:07:04.728 "aliases": [ 00:07:04.728 "fba80845-cecf-4266-9393-ceabe0dbaec4" 00:07:04.728 ], 00:07:04.728 "product_name": "Raid Volume", 00:07:04.728 "block_size": 512, 00:07:04.728 "num_blocks": 126976, 00:07:04.728 "uuid": "fba80845-cecf-4266-9393-ceabe0dbaec4", 00:07:04.728 "assigned_rate_limits": { 00:07:04.728 "rw_ios_per_sec": 0, 00:07:04.728 "rw_mbytes_per_sec": 0, 00:07:04.728 "r_mbytes_per_sec": 0, 00:07:04.728 "w_mbytes_per_sec": 0 00:07:04.728 }, 00:07:04.728 "claimed": false, 00:07:04.728 "zoned": false, 00:07:04.728 "supported_io_types": { 00:07:04.728 "read": true, 00:07:04.728 "write": true, 00:07:04.728 "unmap": true, 00:07:04.728 "flush": true, 00:07:04.728 "reset": true, 00:07:04.728 "nvme_admin": false, 00:07:04.728 "nvme_io": false, 00:07:04.728 "nvme_io_md": false, 00:07:04.728 "write_zeroes": true, 00:07:04.728 "zcopy": false, 00:07:04.728 "get_zone_info": false, 00:07:04.728 "zone_management": false, 00:07:04.728 "zone_append": false, 00:07:04.728 "compare": false, 00:07:04.728 "compare_and_write": false, 00:07:04.728 "abort": false, 00:07:04.728 "seek_hole": false, 00:07:04.728 "seek_data": false, 00:07:04.728 "copy": false, 00:07:04.728 "nvme_iov_md": false 00:07:04.728 }, 00:07:04.728 "memory_domains": [ 00:07:04.728 { 00:07:04.728 
"dma_device_id": "system", 00:07:04.728 "dma_device_type": 1 00:07:04.728 }, 00:07:04.728 { 00:07:04.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.728 "dma_device_type": 2 00:07:04.728 }, 00:07:04.728 { 00:07:04.728 "dma_device_id": "system", 00:07:04.728 "dma_device_type": 1 00:07:04.728 }, 00:07:04.728 { 00:07:04.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.728 "dma_device_type": 2 00:07:04.728 } 00:07:04.728 ], 00:07:04.728 "driver_specific": { 00:07:04.728 "raid": { 00:07:04.728 "uuid": "fba80845-cecf-4266-9393-ceabe0dbaec4", 00:07:04.728 "strip_size_kb": 64, 00:07:04.728 "state": "online", 00:07:04.728 "raid_level": "raid0", 00:07:04.728 "superblock": true, 00:07:04.728 "num_base_bdevs": 2, 00:07:04.728 "num_base_bdevs_discovered": 2, 00:07:04.728 "num_base_bdevs_operational": 2, 00:07:04.728 "base_bdevs_list": [ 00:07:04.728 { 00:07:04.728 "name": "BaseBdev1", 00:07:04.728 "uuid": "74661eb4-d4ec-497d-a515-d91345643719", 00:07:04.728 "is_configured": true, 00:07:04.728 "data_offset": 2048, 00:07:04.728 "data_size": 63488 00:07:04.728 }, 00:07:04.728 { 00:07:04.728 "name": "BaseBdev2", 00:07:04.728 "uuid": "9b7a5659-491d-4db2-b0ff-a387a06c96a9", 00:07:04.728 "is_configured": true, 00:07:04.728 "data_offset": 2048, 00:07:04.728 "data_size": 63488 00:07:04.728 } 00:07:04.728 ] 00:07:04.728 } 00:07:04.728 } 00:07:04.728 }' 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:04.728 BaseBdev2' 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:04.728 23:41:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.728 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.728 [2024-12-06 23:41:16.182807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:04.729 [2024-12-06 23:41:16.182881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.729 [2024-12-06 23:41:16.182955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.729 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.988 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.988 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.988 "name": "Existed_Raid", 00:07:04.988 "uuid": "fba80845-cecf-4266-9393-ceabe0dbaec4", 00:07:04.988 "strip_size_kb": 64, 00:07:04.988 "state": "offline", 00:07:04.988 "raid_level": "raid0", 00:07:04.988 "superblock": true, 00:07:04.988 "num_base_bdevs": 2, 00:07:04.988 "num_base_bdevs_discovered": 1, 00:07:04.988 "num_base_bdevs_operational": 1, 00:07:04.988 "base_bdevs_list": [ 00:07:04.988 { 00:07:04.988 "name": null, 00:07:04.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.988 "is_configured": false, 00:07:04.988 "data_offset": 0, 00:07:04.988 "data_size": 63488 00:07:04.988 }, 00:07:04.988 { 00:07:04.988 "name": "BaseBdev2", 00:07:04.988 "uuid": "9b7a5659-491d-4db2-b0ff-a387a06c96a9", 00:07:04.988 "is_configured": true, 00:07:04.988 "data_offset": 2048, 00:07:04.988 "data_size": 63488 00:07:04.988 } 00:07:04.988 ] 
00:07:04.988 }' 00:07:04.988 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.988 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.246 [2024-12-06 23:41:16.706799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:05.246 [2024-12-06 23:41:16.706854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.246 23:41:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.246 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60891 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60891 ']' 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60891 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60891 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:05.504 killing process with pid 60891 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60891' 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60891 00:07:05.504 [2024-12-06 23:41:16.892205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.504 23:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60891 00:07:05.504 [2024-12-06 23:41:16.908773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.439 23:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:06.439 00:07:06.439 real 0m4.923s 00:07:06.439 user 0m7.117s 00:07:06.439 sys 0m0.748s 00:07:06.439 23:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.439 23:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.439 ************************************ 00:07:06.439 END TEST raid_state_function_test_sb 00:07:06.439 ************************************ 00:07:06.698 23:41:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:06.698 23:41:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:06.698 23:41:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.698 23:41:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.698 ************************************ 00:07:06.698 START TEST raid_superblock_test 00:07:06.698 ************************************ 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- 
# local num_base_bdevs=2 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61139 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61139 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61139 ']' 00:07:06.698 23:41:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.698 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.698 [2024-12-06 23:41:18.159049] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:07:06.698 [2024-12-06 23:41:18.159197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61139 ] 00:07:06.960 [2024-12-06 23:41:18.330555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.960 [2024-12-06 23:41:18.439941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.218 [2024-12-06 23:41:18.632781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.218 [2024-12-06 23:41:18.632840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.476 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.476 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.476 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.477 
23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.477 23:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.477 malloc1 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.477 [2024-12-06 23:41:19.025387] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:07.477 [2024-12-06 23:41:19.025447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.477 [2024-12-06 23:41:19.025484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:07.477 [2024-12-06 23:41:19.025493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:07.477 [2024-12-06 23:41:19.027478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.477 [2024-12-06 23:41:19.027520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:07.477 pt1 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.477 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.745 malloc2 00:07:07.745 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.745 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:07.745 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:07.745 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.745 [2024-12-06 23:41:19.080325] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:07.745 [2024-12-06 23:41:19.080379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.746 [2024-12-06 23:41:19.080403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:07.746 [2024-12-06 23:41:19.080412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.746 [2024-12-06 23:41:19.082471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.746 [2024-12-06 23:41:19.082507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:07.746 pt2 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.746 [2024-12-06 23:41:19.092367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:07.746 [2024-12-06 23:41:19.094080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:07.746 [2024-12-06 23:41:19.094250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.746 [2024-12-06 23:41:19.094270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:07.746 [2024-12-06 23:41:19.094499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:07.746 [2024-12-06 23:41:19.094648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.746 [2024-12-06 23:41:19.094679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:07.746 [2024-12-06 23:41:19.094827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.746 23:41:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.746 "name": "raid_bdev1", 00:07:07.746 "uuid": "65c3b154-7069-49e9-bbb2-0380d1e5b6b5", 00:07:07.746 "strip_size_kb": 64, 00:07:07.746 "state": "online", 00:07:07.746 "raid_level": "raid0", 00:07:07.746 "superblock": true, 00:07:07.746 "num_base_bdevs": 2, 00:07:07.746 "num_base_bdevs_discovered": 2, 00:07:07.746 "num_base_bdevs_operational": 2, 00:07:07.746 "base_bdevs_list": [ 00:07:07.746 { 00:07:07.746 "name": "pt1", 00:07:07.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:07.746 "is_configured": true, 00:07:07.746 "data_offset": 2048, 00:07:07.746 "data_size": 63488 00:07:07.746 }, 00:07:07.746 { 00:07:07.746 "name": "pt2", 00:07:07.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:07.746 "is_configured": true, 00:07:07.746 "data_offset": 2048, 00:07:07.746 "data_size": 63488 00:07:07.746 } 00:07:07.746 ] 00:07:07.746 }' 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.746 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.026 
23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.026 [2024-12-06 23:41:19.523904] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.026 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.026 "name": "raid_bdev1", 00:07:08.026 "aliases": [ 00:07:08.026 "65c3b154-7069-49e9-bbb2-0380d1e5b6b5" 00:07:08.026 ], 00:07:08.026 "product_name": "Raid Volume", 00:07:08.026 "block_size": 512, 00:07:08.026 "num_blocks": 126976, 00:07:08.026 "uuid": "65c3b154-7069-49e9-bbb2-0380d1e5b6b5", 00:07:08.026 "assigned_rate_limits": { 00:07:08.026 "rw_ios_per_sec": 0, 00:07:08.026 "rw_mbytes_per_sec": 0, 00:07:08.026 "r_mbytes_per_sec": 0, 00:07:08.026 "w_mbytes_per_sec": 0 00:07:08.026 }, 00:07:08.026 "claimed": false, 00:07:08.026 "zoned": false, 00:07:08.026 "supported_io_types": { 00:07:08.026 "read": true, 00:07:08.026 "write": true, 00:07:08.026 "unmap": true, 00:07:08.026 "flush": true, 00:07:08.026 "reset": true, 00:07:08.026 "nvme_admin": false, 00:07:08.026 "nvme_io": false, 00:07:08.026 "nvme_io_md": false, 00:07:08.026 "write_zeroes": true, 00:07:08.026 "zcopy": false, 00:07:08.026 "get_zone_info": false, 00:07:08.026 "zone_management": false, 00:07:08.026 "zone_append": false, 00:07:08.027 "compare": false, 00:07:08.027 "compare_and_write": false, 00:07:08.027 "abort": false, 00:07:08.027 "seek_hole": false, 00:07:08.027 
"seek_data": false, 00:07:08.027 "copy": false, 00:07:08.027 "nvme_iov_md": false 00:07:08.027 }, 00:07:08.027 "memory_domains": [ 00:07:08.027 { 00:07:08.027 "dma_device_id": "system", 00:07:08.027 "dma_device_type": 1 00:07:08.027 }, 00:07:08.027 { 00:07:08.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.027 "dma_device_type": 2 00:07:08.027 }, 00:07:08.027 { 00:07:08.027 "dma_device_id": "system", 00:07:08.027 "dma_device_type": 1 00:07:08.027 }, 00:07:08.027 { 00:07:08.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.027 "dma_device_type": 2 00:07:08.027 } 00:07:08.027 ], 00:07:08.027 "driver_specific": { 00:07:08.027 "raid": { 00:07:08.027 "uuid": "65c3b154-7069-49e9-bbb2-0380d1e5b6b5", 00:07:08.027 "strip_size_kb": 64, 00:07:08.027 "state": "online", 00:07:08.027 "raid_level": "raid0", 00:07:08.027 "superblock": true, 00:07:08.027 "num_base_bdevs": 2, 00:07:08.027 "num_base_bdevs_discovered": 2, 00:07:08.027 "num_base_bdevs_operational": 2, 00:07:08.027 "base_bdevs_list": [ 00:07:08.027 { 00:07:08.027 "name": "pt1", 00:07:08.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.027 "is_configured": true, 00:07:08.027 "data_offset": 2048, 00:07:08.027 "data_size": 63488 00:07:08.027 }, 00:07:08.027 { 00:07:08.027 "name": "pt2", 00:07:08.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.027 "is_configured": true, 00:07:08.027 "data_offset": 2048, 00:07:08.027 "data_size": 63488 00:07:08.027 } 00:07:08.027 ] 00:07:08.027 } 00:07:08.027 } 00:07:08.027 }' 00:07:08.027 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.304 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:08.304 pt2' 00:07:08.304 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.304 23:41:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.304 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.304 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:08.304 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 [2024-12-06 23:41:19.751462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=65c3b154-7069-49e9-bbb2-0380d1e5b6b5 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 65c3b154-7069-49e9-bbb2-0380d1e5b6b5 ']' 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 [2024-12-06 23:41:19.779125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.305 [2024-12-06 23:41:19.779155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.305 [2024-12-06 23:41:19.779239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.305 [2024-12-06 23:41:19.779289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.305 [2024-12-06 23:41:19.779304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:08.305 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.564 [2024-12-06 23:41:19.903003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:08.564 [2024-12-06 23:41:19.905389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:08.564 [2024-12-06 23:41:19.905473] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:08.564 [2024-12-06 23:41:19.905530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:08.564 [2024-12-06 23:41:19.905546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.564 [2024-12-06 23:41:19.905560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:08.564 request: 00:07:08.564 { 00:07:08.564 "name": "raid_bdev1", 00:07:08.564 "raid_level": "raid0", 00:07:08.564 "base_bdevs": [ 00:07:08.564 "malloc1", 00:07:08.564 "malloc2" 00:07:08.564 ], 00:07:08.564 "strip_size_kb": 64, 00:07:08.564 "superblock": false, 00:07:08.564 "method": "bdev_raid_create", 00:07:08.564 "req_id": 1 00:07:08.564 } 00:07:08.564 Got JSON-RPC error response 00:07:08.564 response: 00:07:08.564 { 00:07:08.564 "code": -17, 00:07:08.564 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:08.564 } 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:08.564 
23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.564 [2024-12-06 23:41:19.970852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:08.564 [2024-12-06 23:41:19.970915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.564 [2024-12-06 23:41:19.970935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:08.564 [2024-12-06 23:41:19.970947] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.564 [2024-12-06 23:41:19.973447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.564 [2024-12-06 23:41:19.973485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:08.564 [2024-12-06 23:41:19.973570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:08.564 [2024-12-06 23:41:19.973634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:08.564 pt1 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.564 23:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.564 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.564 "name": "raid_bdev1", 00:07:08.564 "uuid": "65c3b154-7069-49e9-bbb2-0380d1e5b6b5", 00:07:08.564 "strip_size_kb": 64, 00:07:08.564 "state": "configuring", 00:07:08.564 "raid_level": "raid0", 00:07:08.564 "superblock": true, 00:07:08.564 "num_base_bdevs": 2, 00:07:08.564 "num_base_bdevs_discovered": 1, 00:07:08.564 "num_base_bdevs_operational": 2, 00:07:08.564 "base_bdevs_list": [ 00:07:08.564 { 00:07:08.564 "name": "pt1", 00:07:08.564 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:08.564 "is_configured": true, 00:07:08.564 "data_offset": 2048, 00:07:08.564 "data_size": 63488 00:07:08.564 }, 00:07:08.564 { 00:07:08.564 "name": null, 00:07:08.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.564 "is_configured": false, 00:07:08.564 "data_offset": 2048, 00:07:08.564 "data_size": 63488 00:07:08.564 } 00:07:08.564 ] 00:07:08.564 }' 00:07:08.564 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.564 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.130 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:09.130 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:09.130 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:09.130 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:09.130 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.130 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.130 [2024-12-06 23:41:20.426876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:09.130 [2024-12-06 23:41:20.426970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.130 [2024-12-06 23:41:20.426998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:09.130 [2024-12-06 23:41:20.427009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.130 [2024-12-06 23:41:20.427521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.130 [2024-12-06 23:41:20.427553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:09.130 [2024-12-06 23:41:20.427649] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:09.130 [2024-12-06 23:41:20.427696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:09.130 [2024-12-06 23:41:20.427838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:09.130 [2024-12-06 23:41:20.427856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.130 [2024-12-06 23:41:20.428087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:09.131 [2024-12-06 23:41:20.428228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:09.131 [2024-12-06 23:41:20.428239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:09.131 [2024-12-06 23:41:20.428379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.131 pt2 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.131 "name": "raid_bdev1", 00:07:09.131 "uuid": "65c3b154-7069-49e9-bbb2-0380d1e5b6b5", 00:07:09.131 "strip_size_kb": 64, 00:07:09.131 "state": "online", 00:07:09.131 "raid_level": "raid0", 00:07:09.131 "superblock": true, 00:07:09.131 "num_base_bdevs": 2, 00:07:09.131 "num_base_bdevs_discovered": 2, 00:07:09.131 "num_base_bdevs_operational": 2, 00:07:09.131 "base_bdevs_list": [ 00:07:09.131 { 00:07:09.131 "name": "pt1", 00:07:09.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.131 "is_configured": true, 00:07:09.131 "data_offset": 2048, 00:07:09.131 "data_size": 63488 00:07:09.131 }, 00:07:09.131 { 00:07:09.131 "name": "pt2", 00:07:09.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.131 "is_configured": true, 00:07:09.131 "data_offset": 2048, 00:07:09.131 "data_size": 63488 00:07:09.131 } 00:07:09.131 ] 00:07:09.131 }' 00:07:09.131 23:41:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.131 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.389 [2024-12-06 23:41:20.923084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.389 23:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.648 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.648 "name": "raid_bdev1", 00:07:09.648 "aliases": [ 00:07:09.648 "65c3b154-7069-49e9-bbb2-0380d1e5b6b5" 00:07:09.648 ], 00:07:09.648 "product_name": "Raid Volume", 00:07:09.648 "block_size": 512, 00:07:09.648 "num_blocks": 126976, 00:07:09.648 "uuid": "65c3b154-7069-49e9-bbb2-0380d1e5b6b5", 00:07:09.648 "assigned_rate_limits": { 00:07:09.648 "rw_ios_per_sec": 0, 00:07:09.648 "rw_mbytes_per_sec": 0, 00:07:09.648 
"r_mbytes_per_sec": 0, 00:07:09.648 "w_mbytes_per_sec": 0 00:07:09.648 }, 00:07:09.648 "claimed": false, 00:07:09.649 "zoned": false, 00:07:09.649 "supported_io_types": { 00:07:09.649 "read": true, 00:07:09.649 "write": true, 00:07:09.649 "unmap": true, 00:07:09.649 "flush": true, 00:07:09.649 "reset": true, 00:07:09.649 "nvme_admin": false, 00:07:09.649 "nvme_io": false, 00:07:09.649 "nvme_io_md": false, 00:07:09.649 "write_zeroes": true, 00:07:09.649 "zcopy": false, 00:07:09.649 "get_zone_info": false, 00:07:09.649 "zone_management": false, 00:07:09.649 "zone_append": false, 00:07:09.649 "compare": false, 00:07:09.649 "compare_and_write": false, 00:07:09.649 "abort": false, 00:07:09.649 "seek_hole": false, 00:07:09.649 "seek_data": false, 00:07:09.649 "copy": false, 00:07:09.649 "nvme_iov_md": false 00:07:09.649 }, 00:07:09.649 "memory_domains": [ 00:07:09.649 { 00:07:09.649 "dma_device_id": "system", 00:07:09.649 "dma_device_type": 1 00:07:09.649 }, 00:07:09.649 { 00:07:09.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.649 "dma_device_type": 2 00:07:09.649 }, 00:07:09.649 { 00:07:09.649 "dma_device_id": "system", 00:07:09.649 "dma_device_type": 1 00:07:09.649 }, 00:07:09.649 { 00:07:09.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.649 "dma_device_type": 2 00:07:09.649 } 00:07:09.649 ], 00:07:09.649 "driver_specific": { 00:07:09.649 "raid": { 00:07:09.649 "uuid": "65c3b154-7069-49e9-bbb2-0380d1e5b6b5", 00:07:09.649 "strip_size_kb": 64, 00:07:09.649 "state": "online", 00:07:09.649 "raid_level": "raid0", 00:07:09.649 "superblock": true, 00:07:09.649 "num_base_bdevs": 2, 00:07:09.649 "num_base_bdevs_discovered": 2, 00:07:09.649 "num_base_bdevs_operational": 2, 00:07:09.649 "base_bdevs_list": [ 00:07:09.649 { 00:07:09.649 "name": "pt1", 00:07:09.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.649 "is_configured": true, 00:07:09.649 "data_offset": 2048, 00:07:09.649 "data_size": 63488 00:07:09.649 }, 00:07:09.649 { 00:07:09.649 "name": 
"pt2", 00:07:09.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.649 "is_configured": true, 00:07:09.649 "data_offset": 2048, 00:07:09.649 "data_size": 63488 00:07:09.649 } 00:07:09.649 ] 00:07:09.649 } 00:07:09.649 } 00:07:09.649 }' 00:07:09.649 23:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:09.649 pt2' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.649 [2024-12-06 23:41:21.147004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 65c3b154-7069-49e9-bbb2-0380d1e5b6b5 '!=' 65c3b154-7069-49e9-bbb2-0380d1e5b6b5 ']' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61139 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61139 ']' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61139 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.649 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61139 00:07:09.908 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.908 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.908 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61139' 00:07:09.908 killing process with pid 61139 00:07:09.908 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61139 00:07:09.908 [2024-12-06 23:41:21.234027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.908 [2024-12-06 23:41:21.234166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.908 [2024-12-06 23:41:21.234249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr 23:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61139 00:07:09.908 ee all in destruct 00:07:09.908 [2024-12-06 23:41:21.234288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:09.908 [2024-12-06 23:41:21.433653] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.284 23:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:11.284 00:07:11.284 real 0m4.459s 00:07:11.284 user 0m6.249s 00:07:11.284 sys 0m0.745s 00:07:11.284 23:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.284 23:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:11.284 ************************************ 00:07:11.284 END TEST raid_superblock_test 00:07:11.284 ************************************ 00:07:11.284 23:41:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:11.284 23:41:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:11.284 23:41:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.284 23:41:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.284 ************************************ 00:07:11.284 START TEST raid_read_error_test 00:07:11.284 ************************************ 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:11.284 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0jX72ApucN 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61345 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61345 00:07:11.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61345 ']' 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.285 23:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.285 [2024-12-06 23:41:22.698378] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:07:11.285 [2024-12-06 23:41:22.698493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61345 ] 00:07:11.544 [2024-12-06 23:41:22.873676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.544 [2024-12-06 23:41:22.981170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.803 [2024-12-06 23:41:23.174435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.803 [2024-12-06 23:41:23.174464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.063 BaseBdev1_malloc 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.063 true 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.063 [2024-12-06 23:41:23.582582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:12.063 [2024-12-06 23:41:23.582641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.063 [2024-12-06 23:41:23.582693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:12.063 [2024-12-06 23:41:23.582704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.063 [2024-12-06 23:41:23.584694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.063 [2024-12-06 23:41:23.584728] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:12.063 BaseBdev1 00:07:12.063 
23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.063 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.322 BaseBdev2_malloc 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.322 true 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.322 [2024-12-06 23:41:23.648839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:12.322 [2024-12-06 23:41:23.648892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.322 [2024-12-06 23:41:23.648922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:12.322 [2024-12-06 23:41:23.648932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.322 [2024-12-06 
23:41:23.650914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.322 [2024-12-06 23:41:23.650955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:12.322 BaseBdev2 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.322 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.323 [2024-12-06 23:41:23.660879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.323 [2024-12-06 23:41:23.662595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.323 [2024-12-06 23:41:23.662803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.323 [2024-12-06 23:41:23.662828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.323 [2024-12-06 23:41:23.663047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:12.323 [2024-12-06 23:41:23.663205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.323 [2024-12-06 23:41:23.663221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:12.323 [2024-12-06 23:41:23.663361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 
00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.323 "name": "raid_bdev1", 00:07:12.323 "uuid": "f099c24a-3de0-4202-85fc-04fd84c7cc27", 00:07:12.323 "strip_size_kb": 64, 00:07:12.323 "state": "online", 00:07:12.323 "raid_level": "raid0", 00:07:12.323 "superblock": true, 00:07:12.323 "num_base_bdevs": 2, 00:07:12.323 "num_base_bdevs_discovered": 2, 00:07:12.323 "num_base_bdevs_operational": 2, 00:07:12.323 
"base_bdevs_list": [ 00:07:12.323 { 00:07:12.323 "name": "BaseBdev1", 00:07:12.323 "uuid": "1de15011-5e33-5f0c-abd5-080676499f96", 00:07:12.323 "is_configured": true, 00:07:12.323 "data_offset": 2048, 00:07:12.323 "data_size": 63488 00:07:12.323 }, 00:07:12.323 { 00:07:12.323 "name": "BaseBdev2", 00:07:12.323 "uuid": "529ef638-53f2-58a3-a889-84334cea217c", 00:07:12.323 "is_configured": true, 00:07:12.323 "data_offset": 2048, 00:07:12.323 "data_size": 63488 00:07:12.323 } 00:07:12.323 ] 00:07:12.323 }' 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.323 23:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.582 23:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:12.582 23:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:12.582 [2024-12-06 23:41:24.141366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 
00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.519 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.776 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.776 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.776 "name": "raid_bdev1", 00:07:13.776 "uuid": "f099c24a-3de0-4202-85fc-04fd84c7cc27", 00:07:13.776 "strip_size_kb": 64, 00:07:13.776 "state": "online", 00:07:13.776 "raid_level": "raid0", 00:07:13.776 "superblock": true, 00:07:13.776 "num_base_bdevs": 2, 00:07:13.776 "num_base_bdevs_discovered": 2, 00:07:13.776 "num_base_bdevs_operational": 2, 00:07:13.776 
"base_bdevs_list": [ 00:07:13.776 { 00:07:13.776 "name": "BaseBdev1", 00:07:13.776 "uuid": "1de15011-5e33-5f0c-abd5-080676499f96", 00:07:13.776 "is_configured": true, 00:07:13.776 "data_offset": 2048, 00:07:13.776 "data_size": 63488 00:07:13.776 }, 00:07:13.776 { 00:07:13.776 "name": "BaseBdev2", 00:07:13.776 "uuid": "529ef638-53f2-58a3-a889-84334cea217c", 00:07:13.776 "is_configured": true, 00:07:13.776 "data_offset": 2048, 00:07:13.776 "data_size": 63488 00:07:13.776 } 00:07:13.776 ] 00:07:13.777 }' 00:07:13.777 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.777 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.035 [2024-12-06 23:41:25.537030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:14.035 [2024-12-06 23:41:25.537070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.035 [2024-12-06 23:41:25.539822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.035 [2024-12-06 23:41:25.539870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.035 [2024-12-06 23:41:25.539902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.035 [2024-12-06 23:41:25.539913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:14.035 { 00:07:14.035 "results": [ 00:07:14.035 { 00:07:14.035 "job": "raid_bdev1", 00:07:14.035 "core_mask": "0x1", 00:07:14.035 "workload": "randrw", 00:07:14.035 "percentage": 50, 
00:07:14.035 "status": "finished", 00:07:14.035 "queue_depth": 1, 00:07:14.035 "io_size": 131072, 00:07:14.035 "runtime": 1.396671, 00:07:14.035 "iops": 16131.21486735244, 00:07:14.035 "mibps": 2016.401858419055, 00:07:14.035 "io_failed": 1, 00:07:14.035 "io_timeout": 0, 00:07:14.035 "avg_latency_us": 85.73004343942233, 00:07:14.035 "min_latency_us": 25.9353711790393, 00:07:14.035 "max_latency_us": 1402.2986899563318 00:07:14.035 } 00:07:14.035 ], 00:07:14.035 "core_count": 1 00:07:14.035 } 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61345 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61345 ']' 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61345 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61345 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.035 killing process with pid 61345 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61345' 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61345 00:07:14.035 [2024-12-06 23:41:25.583644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.035 23:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61345 00:07:14.293 [2024-12-06 
23:41:25.716916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0jX72ApucN 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:15.671 00:07:15.671 real 0m4.258s 00:07:15.671 user 0m5.102s 00:07:15.671 sys 0m0.522s 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.671 23:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.671 ************************************ 00:07:15.671 END TEST raid_read_error_test 00:07:15.671 ************************************ 00:07:15.671 23:41:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:15.671 23:41:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:15.671 23:41:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.671 23:41:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.671 ************************************ 00:07:15.671 START TEST raid_write_error_test 00:07:15.671 ************************************ 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:15.671 23:41:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:15.671 23:41:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GUgkQy7eF7 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61489 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61489 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61489 ']' 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.671 23:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.672 [2024-12-06 23:41:27.025493] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:07:15.672 [2024-12-06 23:41:27.025610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61489 ] 00:07:15.672 [2024-12-06 23:41:27.202442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.930 [2024-12-06 23:41:27.313248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.188 [2024-12-06 23:41:27.512387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.188 [2024-12-06 23:41:27.512453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 BaseBdev1_malloc 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 true 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 [2024-12-06 23:41:27.911118] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:16.447 [2024-12-06 23:41:27.911178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.447 [2024-12-06 23:41:27.911197] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:16.447 [2024-12-06 23:41:27.911208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.447 [2024-12-06 23:41:27.913222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.447 [2024-12-06 23:41:27.913261] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:16.447 BaseBdev1 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 BaseBdev2_malloc 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:16.447 23:41:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 true 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 [2024-12-06 23:41:27.976450] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:16.447 [2024-12-06 23:41:27.976519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.447 [2024-12-06 23:41:27.976534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:16.447 [2024-12-06 23:41:27.976544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.447 [2024-12-06 23:41:27.978515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.447 [2024-12-06 23:41:27.978550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:16.447 BaseBdev2 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 [2024-12-06 23:41:27.988495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:16.447 [2024-12-06 23:41:27.990337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.447 [2024-12-06 23:41:27.990531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:16.447 [2024-12-06 23:41:27.990556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.447 [2024-12-06 23:41:27.990812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:16.447 [2024-12-06 23:41:27.991008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:16.447 [2024-12-06 23:41:27.991025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:16.447 [2024-12-06 23:41:27.991197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.447 23:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.447 23:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.447 23:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.705 23:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.705 23:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.705 "name": "raid_bdev1", 00:07:16.705 "uuid": "658649e0-c419-461b-acb7-f7acc2fac32e", 00:07:16.705 "strip_size_kb": 64, 00:07:16.705 "state": "online", 00:07:16.705 "raid_level": "raid0", 00:07:16.705 "superblock": true, 00:07:16.705 "num_base_bdevs": 2, 00:07:16.705 "num_base_bdevs_discovered": 2, 00:07:16.705 "num_base_bdevs_operational": 2, 00:07:16.705 "base_bdevs_list": [ 00:07:16.705 { 00:07:16.705 "name": "BaseBdev1", 00:07:16.705 "uuid": "047a27cf-2879-53ae-9749-8aad0627fa76", 00:07:16.705 "is_configured": true, 00:07:16.705 "data_offset": 2048, 00:07:16.705 "data_size": 63488 00:07:16.705 }, 00:07:16.705 { 00:07:16.705 "name": "BaseBdev2", 00:07:16.705 "uuid": "12cadeb0-be91-578f-8cfa-f983eee3321f", 00:07:16.705 "is_configured": true, 00:07:16.705 "data_offset": 2048, 00:07:16.705 "data_size": 63488 00:07:16.705 } 00:07:16.705 ] 00:07:16.705 }' 00:07:16.705 23:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.705 23:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.969 23:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:16.969 23:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:16.969 [2024-12-06 23:41:28.509033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.912 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.171 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.171 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.171 "name": "raid_bdev1", 00:07:18.171 "uuid": "658649e0-c419-461b-acb7-f7acc2fac32e", 00:07:18.171 "strip_size_kb": 64, 00:07:18.171 "state": "online", 00:07:18.171 "raid_level": "raid0", 00:07:18.171 "superblock": true, 00:07:18.171 "num_base_bdevs": 2, 00:07:18.171 "num_base_bdevs_discovered": 2, 00:07:18.171 "num_base_bdevs_operational": 2, 00:07:18.171 "base_bdevs_list": [ 00:07:18.171 { 00:07:18.171 "name": "BaseBdev1", 00:07:18.171 "uuid": "047a27cf-2879-53ae-9749-8aad0627fa76", 00:07:18.171 "is_configured": true, 00:07:18.171 "data_offset": 2048, 00:07:18.171 "data_size": 63488 00:07:18.171 }, 00:07:18.171 { 00:07:18.171 "name": "BaseBdev2", 00:07:18.171 "uuid": "12cadeb0-be91-578f-8cfa-f983eee3321f", 00:07:18.171 "is_configured": true, 00:07:18.171 "data_offset": 2048, 00:07:18.171 "data_size": 63488 00:07:18.171 } 00:07:18.171 ] 00:07:18.171 }' 00:07:18.171 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.171 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.430 [2024-12-06 23:41:29.961441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.430 [2024-12-06 23:41:29.961485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.430 [2024-12-06 23:41:29.964482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.430 [2024-12-06 23:41:29.964531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.430 [2024-12-06 23:41:29.964564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.430 [2024-12-06 23:41:29.964576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:18.430 { 00:07:18.430 "results": [ 00:07:18.430 { 00:07:18.430 "job": "raid_bdev1", 00:07:18.430 "core_mask": "0x1", 00:07:18.430 "workload": "randrw", 00:07:18.430 "percentage": 50, 00:07:18.430 "status": "finished", 00:07:18.430 "queue_depth": 1, 00:07:18.430 "io_size": 131072, 00:07:18.430 "runtime": 1.453597, 00:07:18.430 "iops": 16098.684848689149, 00:07:18.430 "mibps": 2012.3356060861436, 00:07:18.430 "io_failed": 1, 00:07:18.430 "io_timeout": 0, 00:07:18.430 "avg_latency_us": 85.91596388768323, 00:07:18.430 "min_latency_us": 25.2646288209607, 00:07:18.430 "max_latency_us": 1423.7624454148472 00:07:18.430 } 00:07:18.430 ], 00:07:18.430 "core_count": 1 00:07:18.430 } 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61489 00:07:18.430 23:41:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61489 ']' 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61489 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.430 23:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61489 00:07:18.688 23:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.688 23:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.688 killing process with pid 61489 00:07:18.688 23:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61489' 00:07:18.688 23:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61489 00:07:18.688 [2024-12-06 23:41:30.004209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.688 23:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61489 00:07:18.688 [2024-12-06 23:41:30.135455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GUgkQy7eF7 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.065 23:41:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:07:20.065 00:07:20.065 real 0m4.344s 00:07:20.065 user 0m5.261s 00:07:20.065 sys 0m0.512s 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.065 23:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.065 ************************************ 00:07:20.065 END TEST raid_write_error_test 00:07:20.065 ************************************ 00:07:20.065 23:41:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:20.065 23:41:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:20.065 23:41:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:20.065 23:41:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.065 23:41:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.065 ************************************ 00:07:20.065 START TEST raid_state_function_test 00:07:20.065 ************************************ 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61628 
00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61628' 00:07:20.065 Process raid pid: 61628 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61628 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61628 ']' 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.065 23:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.065 [2024-12-06 23:41:31.421601] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:07:20.065 [2024-12-06 23:41:31.421732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.065 [2024-12-06 23:41:31.597629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.324 [2024-12-06 23:41:31.711161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.582 [2024-12-06 23:41:31.914918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.582 [2024-12-06 23:41:31.914957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.841 [2024-12-06 23:41:32.266295] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.841 [2024-12-06 23:41:32.266357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.841 [2024-12-06 23:41:32.266368] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.841 [2024-12-06 23:41:32.266377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.841 23:41:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.841 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.841 "name": "Existed_Raid", 00:07:20.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.841 "strip_size_kb": 64, 00:07:20.841 "state": "configuring", 00:07:20.841 
"raid_level": "concat", 00:07:20.841 "superblock": false, 00:07:20.841 "num_base_bdevs": 2, 00:07:20.841 "num_base_bdevs_discovered": 0, 00:07:20.841 "num_base_bdevs_operational": 2, 00:07:20.841 "base_bdevs_list": [ 00:07:20.841 { 00:07:20.841 "name": "BaseBdev1", 00:07:20.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.841 "is_configured": false, 00:07:20.841 "data_offset": 0, 00:07:20.841 "data_size": 0 00:07:20.841 }, 00:07:20.841 { 00:07:20.841 "name": "BaseBdev2", 00:07:20.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.841 "is_configured": false, 00:07:20.841 "data_offset": 0, 00:07:20.841 "data_size": 0 00:07:20.841 } 00:07:20.841 ] 00:07:20.841 }' 00:07:20.842 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.842 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.409 [2024-12-06 23:41:32.685550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.409 [2024-12-06 23:41:32.685638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:21.409 [2024-12-06 23:41:32.697521] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.409 [2024-12-06 23:41:32.697603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.409 [2024-12-06 23:41:32.697631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.409 [2024-12-06 23:41:32.697667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.409 [2024-12-06 23:41:32.743628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.409 BaseBdev1 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.409 [ 00:07:21.409 { 00:07:21.409 "name": "BaseBdev1", 00:07:21.409 "aliases": [ 00:07:21.409 "5b913439-609d-41d1-9174-6585454581f1" 00:07:21.409 ], 00:07:21.409 "product_name": "Malloc disk", 00:07:21.409 "block_size": 512, 00:07:21.409 "num_blocks": 65536, 00:07:21.409 "uuid": "5b913439-609d-41d1-9174-6585454581f1", 00:07:21.409 "assigned_rate_limits": { 00:07:21.409 "rw_ios_per_sec": 0, 00:07:21.409 "rw_mbytes_per_sec": 0, 00:07:21.409 "r_mbytes_per_sec": 0, 00:07:21.409 "w_mbytes_per_sec": 0 00:07:21.409 }, 00:07:21.409 "claimed": true, 00:07:21.409 "claim_type": "exclusive_write", 00:07:21.409 "zoned": false, 00:07:21.409 "supported_io_types": { 00:07:21.409 "read": true, 00:07:21.409 "write": true, 00:07:21.409 "unmap": true, 00:07:21.409 "flush": true, 00:07:21.409 "reset": true, 00:07:21.409 "nvme_admin": false, 00:07:21.409 "nvme_io": false, 00:07:21.409 "nvme_io_md": false, 00:07:21.409 "write_zeroes": true, 00:07:21.409 "zcopy": true, 00:07:21.409 "get_zone_info": false, 00:07:21.409 "zone_management": false, 00:07:21.409 "zone_append": false, 00:07:21.409 "compare": false, 00:07:21.409 "compare_and_write": false, 00:07:21.409 "abort": true, 00:07:21.409 "seek_hole": false, 00:07:21.409 "seek_data": false, 00:07:21.409 "copy": true, 00:07:21.409 "nvme_iov_md": 
false 00:07:21.409 }, 00:07:21.409 "memory_domains": [ 00:07:21.409 { 00:07:21.409 "dma_device_id": "system", 00:07:21.409 "dma_device_type": 1 00:07:21.409 }, 00:07:21.409 { 00:07:21.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.409 "dma_device_type": 2 00:07:21.409 } 00:07:21.409 ], 00:07:21.409 "driver_specific": {} 00:07:21.409 } 00:07:21.409 ] 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.409 
23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.409 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.409 "name": "Existed_Raid", 00:07:21.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.409 "strip_size_kb": 64, 00:07:21.409 "state": "configuring", 00:07:21.410 "raid_level": "concat", 00:07:21.410 "superblock": false, 00:07:21.410 "num_base_bdevs": 2, 00:07:21.410 "num_base_bdevs_discovered": 1, 00:07:21.410 "num_base_bdevs_operational": 2, 00:07:21.410 "base_bdevs_list": [ 00:07:21.410 { 00:07:21.410 "name": "BaseBdev1", 00:07:21.410 "uuid": "5b913439-609d-41d1-9174-6585454581f1", 00:07:21.410 "is_configured": true, 00:07:21.410 "data_offset": 0, 00:07:21.410 "data_size": 65536 00:07:21.410 }, 00:07:21.410 { 00:07:21.410 "name": "BaseBdev2", 00:07:21.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.410 "is_configured": false, 00:07:21.410 "data_offset": 0, 00:07:21.410 "data_size": 0 00:07:21.410 } 00:07:21.410 ] 00:07:21.410 }' 00:07:21.410 23:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.410 23:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 [2024-12-06 23:41:33.198884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.668 [2024-12-06 23:41:33.198986] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 [2024-12-06 23:41:33.210893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.668 [2024-12-06 23:41:33.212675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.668 [2024-12-06 23:41:33.212733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.668 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.669 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.928 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.928 "name": "Existed_Raid", 00:07:21.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.928 "strip_size_kb": 64, 00:07:21.928 "state": "configuring", 00:07:21.928 "raid_level": "concat", 00:07:21.928 "superblock": false, 00:07:21.928 "num_base_bdevs": 2, 00:07:21.928 "num_base_bdevs_discovered": 1, 00:07:21.928 "num_base_bdevs_operational": 2, 00:07:21.928 "base_bdevs_list": [ 00:07:21.928 { 00:07:21.928 "name": "BaseBdev1", 00:07:21.928 "uuid": "5b913439-609d-41d1-9174-6585454581f1", 00:07:21.928 "is_configured": true, 00:07:21.928 "data_offset": 0, 00:07:21.928 "data_size": 65536 00:07:21.928 }, 00:07:21.928 { 00:07:21.928 "name": "BaseBdev2", 00:07:21.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.928 "is_configured": false, 00:07:21.928 "data_offset": 0, 00:07:21.928 "data_size": 0 00:07:21.928 } 
00:07:21.928 ] 00:07:21.928 }' 00:07:21.928 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.928 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.186 [2024-12-06 23:41:33.716655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.186 [2024-12-06 23:41:33.716806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.186 [2024-12-06 23:41:33.716831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:22.186 [2024-12-06 23:41:33.717146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.186 [2024-12-06 23:41:33.717367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.186 [2024-12-06 23:41:33.717415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:22.186 [2024-12-06 23:41:33.717729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.186 BaseBdev2 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.186 23:41:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.186 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.186 [ 00:07:22.186 { 00:07:22.186 "name": "BaseBdev2", 00:07:22.186 "aliases": [ 00:07:22.186 "ed952d79-0093-4d8a-b0f4-2db95c2650a9" 00:07:22.186 ], 00:07:22.186 "product_name": "Malloc disk", 00:07:22.186 "block_size": 512, 00:07:22.186 "num_blocks": 65536, 00:07:22.186 "uuid": "ed952d79-0093-4d8a-b0f4-2db95c2650a9", 00:07:22.186 "assigned_rate_limits": { 00:07:22.186 "rw_ios_per_sec": 0, 00:07:22.186 "rw_mbytes_per_sec": 0, 00:07:22.186 "r_mbytes_per_sec": 0, 00:07:22.186 "w_mbytes_per_sec": 0 00:07:22.186 }, 00:07:22.186 "claimed": true, 00:07:22.186 "claim_type": "exclusive_write", 00:07:22.186 "zoned": false, 00:07:22.186 "supported_io_types": { 00:07:22.186 "read": true, 00:07:22.186 "write": true, 00:07:22.186 "unmap": true, 00:07:22.186 "flush": true, 00:07:22.186 "reset": true, 00:07:22.186 "nvme_admin": false, 00:07:22.187 "nvme_io": false, 00:07:22.445 "nvme_io_md": 
false, 00:07:22.445 "write_zeroes": true, 00:07:22.445 "zcopy": true, 00:07:22.445 "get_zone_info": false, 00:07:22.445 "zone_management": false, 00:07:22.445 "zone_append": false, 00:07:22.445 "compare": false, 00:07:22.445 "compare_and_write": false, 00:07:22.445 "abort": true, 00:07:22.445 "seek_hole": false, 00:07:22.445 "seek_data": false, 00:07:22.445 "copy": true, 00:07:22.445 "nvme_iov_md": false 00:07:22.445 }, 00:07:22.445 "memory_domains": [ 00:07:22.445 { 00:07:22.445 "dma_device_id": "system", 00:07:22.445 "dma_device_type": 1 00:07:22.445 }, 00:07:22.445 { 00:07:22.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.445 "dma_device_type": 2 00:07:22.445 } 00:07:22.445 ], 00:07:22.445 "driver_specific": {} 00:07:22.445 } 00:07:22.445 ] 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.445 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.445 "name": "Existed_Raid", 00:07:22.445 "uuid": "a889047b-0b52-4a41-8f05-85494a78ce84", 00:07:22.445 "strip_size_kb": 64, 00:07:22.445 "state": "online", 00:07:22.445 "raid_level": "concat", 00:07:22.445 "superblock": false, 00:07:22.445 "num_base_bdevs": 2, 00:07:22.445 "num_base_bdevs_discovered": 2, 00:07:22.445 "num_base_bdevs_operational": 2, 00:07:22.446 "base_bdevs_list": [ 00:07:22.446 { 00:07:22.446 "name": "BaseBdev1", 00:07:22.446 "uuid": "5b913439-609d-41d1-9174-6585454581f1", 00:07:22.446 "is_configured": true, 00:07:22.446 "data_offset": 0, 00:07:22.446 "data_size": 65536 00:07:22.446 }, 00:07:22.446 { 00:07:22.446 "name": "BaseBdev2", 00:07:22.446 "uuid": "ed952d79-0093-4d8a-b0f4-2db95c2650a9", 00:07:22.446 "is_configured": true, 00:07:22.446 "data_offset": 0, 00:07:22.446 "data_size": 65536 00:07:22.446 } 00:07:22.446 ] 00:07:22.446 }' 00:07:22.446 23:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:22.446 23:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.704 [2024-12-06 23:41:34.232108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.704 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.963 "name": "Existed_Raid", 00:07:22.963 "aliases": [ 00:07:22.963 "a889047b-0b52-4a41-8f05-85494a78ce84" 00:07:22.963 ], 00:07:22.963 "product_name": "Raid Volume", 00:07:22.963 "block_size": 512, 00:07:22.963 "num_blocks": 131072, 00:07:22.963 "uuid": "a889047b-0b52-4a41-8f05-85494a78ce84", 00:07:22.963 "assigned_rate_limits": { 00:07:22.963 "rw_ios_per_sec": 0, 00:07:22.963 "rw_mbytes_per_sec": 0, 00:07:22.963 "r_mbytes_per_sec": 
0, 00:07:22.963 "w_mbytes_per_sec": 0 00:07:22.963 }, 00:07:22.963 "claimed": false, 00:07:22.963 "zoned": false, 00:07:22.963 "supported_io_types": { 00:07:22.963 "read": true, 00:07:22.963 "write": true, 00:07:22.963 "unmap": true, 00:07:22.963 "flush": true, 00:07:22.963 "reset": true, 00:07:22.963 "nvme_admin": false, 00:07:22.963 "nvme_io": false, 00:07:22.963 "nvme_io_md": false, 00:07:22.963 "write_zeroes": true, 00:07:22.963 "zcopy": false, 00:07:22.963 "get_zone_info": false, 00:07:22.963 "zone_management": false, 00:07:22.963 "zone_append": false, 00:07:22.963 "compare": false, 00:07:22.963 "compare_and_write": false, 00:07:22.963 "abort": false, 00:07:22.963 "seek_hole": false, 00:07:22.963 "seek_data": false, 00:07:22.963 "copy": false, 00:07:22.963 "nvme_iov_md": false 00:07:22.963 }, 00:07:22.963 "memory_domains": [ 00:07:22.963 { 00:07:22.963 "dma_device_id": "system", 00:07:22.963 "dma_device_type": 1 00:07:22.963 }, 00:07:22.963 { 00:07:22.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.963 "dma_device_type": 2 00:07:22.963 }, 00:07:22.963 { 00:07:22.963 "dma_device_id": "system", 00:07:22.963 "dma_device_type": 1 00:07:22.963 }, 00:07:22.963 { 00:07:22.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.963 "dma_device_type": 2 00:07:22.963 } 00:07:22.963 ], 00:07:22.963 "driver_specific": { 00:07:22.963 "raid": { 00:07:22.963 "uuid": "a889047b-0b52-4a41-8f05-85494a78ce84", 00:07:22.963 "strip_size_kb": 64, 00:07:22.963 "state": "online", 00:07:22.963 "raid_level": "concat", 00:07:22.963 "superblock": false, 00:07:22.963 "num_base_bdevs": 2, 00:07:22.963 "num_base_bdevs_discovered": 2, 00:07:22.963 "num_base_bdevs_operational": 2, 00:07:22.963 "base_bdevs_list": [ 00:07:22.963 { 00:07:22.963 "name": "BaseBdev1", 00:07:22.963 "uuid": "5b913439-609d-41d1-9174-6585454581f1", 00:07:22.963 "is_configured": true, 00:07:22.963 "data_offset": 0, 00:07:22.963 "data_size": 65536 00:07:22.963 }, 00:07:22.963 { 00:07:22.963 "name": "BaseBdev2", 
00:07:22.963 "uuid": "ed952d79-0093-4d8a-b0f4-2db95c2650a9", 00:07:22.963 "is_configured": true, 00:07:22.963 "data_offset": 0, 00:07:22.963 "data_size": 65536 00:07:22.963 } 00:07:22.963 ] 00:07:22.963 } 00:07:22.963 } 00:07:22.963 }' 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:22.963 BaseBdev2' 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:22.963 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.964 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.964 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.964 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.964 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.964 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.964 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:22.964 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.964 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.964 [2024-12-06 23:41:34.491465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:22.964 [2024-12-06 23:41:34.491556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.964 [2024-12-06 23:41:34.491643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.222 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.222 "name": "Existed_Raid", 00:07:23.222 "uuid": "a889047b-0b52-4a41-8f05-85494a78ce84", 00:07:23.222 "strip_size_kb": 64, 00:07:23.222 
"state": "offline", 00:07:23.222 "raid_level": "concat", 00:07:23.222 "superblock": false, 00:07:23.222 "num_base_bdevs": 2, 00:07:23.222 "num_base_bdevs_discovered": 1, 00:07:23.222 "num_base_bdevs_operational": 1, 00:07:23.222 "base_bdevs_list": [ 00:07:23.222 { 00:07:23.222 "name": null, 00:07:23.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.223 "is_configured": false, 00:07:23.223 "data_offset": 0, 00:07:23.223 "data_size": 65536 00:07:23.223 }, 00:07:23.223 { 00:07:23.223 "name": "BaseBdev2", 00:07:23.223 "uuid": "ed952d79-0093-4d8a-b0f4-2db95c2650a9", 00:07:23.223 "is_configured": true, 00:07:23.223 "data_offset": 0, 00:07:23.223 "data_size": 65536 00:07:23.223 } 00:07:23.223 ] 00:07:23.223 }' 00:07:23.223 23:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.223 23:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.481 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:23.481 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.481 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.481 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.481 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:23.481 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.481 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.739 [2024-12-06 23:41:35.069407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:23.739 [2024-12-06 23:41:35.069461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61628 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61628 ']' 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61628 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61628 00:07:23.739 killing process with pid 61628 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61628' 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61628 00:07:23.739 [2024-12-06 23:41:35.244931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.739 23:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61628 00:07:23.739 [2024-12-06 23:41:35.262084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:25.114 00:07:25.114 real 0m5.011s 00:07:25.114 user 0m7.283s 00:07:25.114 sys 0m0.816s 00:07:25.114 ************************************ 00:07:25.114 END TEST raid_state_function_test 00:07:25.114 ************************************ 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.114 23:41:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:25.114 23:41:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:25.114 23:41:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.114 23:41:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.114 ************************************ 00:07:25.114 START TEST raid_state_function_test_sb 00:07:25.114 ************************************ 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61876 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61876' 00:07:25.114 Process raid pid: 61876 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61876 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61876 ']' 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.114 23:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.114 [2024-12-06 23:41:36.509283] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:07:25.114 [2024-12-06 23:41:36.509519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.373 [2024-12-06 23:41:36.686710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.373 [2024-12-06 23:41:36.798425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.631 [2024-12-06 23:41:36.995907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.631 [2024-12-06 23:41:36.996028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.891 [2024-12-06 23:41:37.336623] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:25.891 [2024-12-06 23:41:37.336753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.891 [2024-12-06 23:41:37.336788] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.891 [2024-12-06 23:41:37.336812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.891 "name": "Existed_Raid", 00:07:25.891 "uuid": "1afbe8d9-a92f-4b89-9781-864ebf1f0a96", 00:07:25.891 "strip_size_kb": 64, 00:07:25.891 "state": "configuring", 00:07:25.891 "raid_level": "concat", 00:07:25.891 "superblock": true, 00:07:25.891 "num_base_bdevs": 2, 00:07:25.891 "num_base_bdevs_discovered": 0, 00:07:25.891 "num_base_bdevs_operational": 2, 00:07:25.891 "base_bdevs_list": [ 00:07:25.891 { 00:07:25.891 "name": "BaseBdev1", 00:07:25.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.891 "is_configured": false, 00:07:25.891 "data_offset": 0, 00:07:25.891 "data_size": 0 00:07:25.891 }, 00:07:25.891 { 00:07:25.891 "name": "BaseBdev2", 00:07:25.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.891 "is_configured": false, 00:07:25.891 "data_offset": 0, 00:07:25.891 "data_size": 0 00:07:25.891 } 00:07:25.891 ] 00:07:25.891 }' 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.891 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.458 [2024-12-06 23:41:37.815786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:26.458 [2024-12-06 23:41:37.815865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.458 [2024-12-06 23:41:37.827764] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.458 [2024-12-06 23:41:37.827804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.458 [2024-12-06 23:41:37.827813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.458 [2024-12-06 23:41:37.827840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.458 [2024-12-06 23:41:37.874916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.458 BaseBdev1 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.458 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.458 [ 00:07:26.458 { 00:07:26.458 "name": "BaseBdev1", 00:07:26.458 "aliases": [ 00:07:26.458 "5c7ba35a-9f89-4a07-ab13-a22d614f3a32" 00:07:26.458 ], 00:07:26.458 "product_name": "Malloc disk", 00:07:26.458 "block_size": 512, 00:07:26.458 "num_blocks": 65536, 00:07:26.458 "uuid": "5c7ba35a-9f89-4a07-ab13-a22d614f3a32", 00:07:26.458 "assigned_rate_limits": { 00:07:26.458 "rw_ios_per_sec": 0, 00:07:26.458 "rw_mbytes_per_sec": 0, 00:07:26.458 "r_mbytes_per_sec": 0, 00:07:26.458 "w_mbytes_per_sec": 0 00:07:26.458 }, 00:07:26.458 "claimed": true, 
00:07:26.458 "claim_type": "exclusive_write", 00:07:26.458 "zoned": false, 00:07:26.458 "supported_io_types": { 00:07:26.458 "read": true, 00:07:26.458 "write": true, 00:07:26.458 "unmap": true, 00:07:26.458 "flush": true, 00:07:26.458 "reset": true, 00:07:26.458 "nvme_admin": false, 00:07:26.458 "nvme_io": false, 00:07:26.458 "nvme_io_md": false, 00:07:26.458 "write_zeroes": true, 00:07:26.458 "zcopy": true, 00:07:26.458 "get_zone_info": false, 00:07:26.458 "zone_management": false, 00:07:26.458 "zone_append": false, 00:07:26.458 "compare": false, 00:07:26.458 "compare_and_write": false, 00:07:26.458 "abort": true, 00:07:26.458 "seek_hole": false, 00:07:26.458 "seek_data": false, 00:07:26.458 "copy": true, 00:07:26.458 "nvme_iov_md": false 00:07:26.458 }, 00:07:26.458 "memory_domains": [ 00:07:26.458 { 00:07:26.459 "dma_device_id": "system", 00:07:26.459 "dma_device_type": 1 00:07:26.459 }, 00:07:26.459 { 00:07:26.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.459 "dma_device_type": 2 00:07:26.459 } 00:07:26.459 ], 00:07:26.459 "driver_specific": {} 00:07:26.459 } 00:07:26.459 ] 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.459 23:41:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.459 "name": "Existed_Raid", 00:07:26.459 "uuid": "be60d9de-5d85-4306-b639-03ebf349f638", 00:07:26.459 "strip_size_kb": 64, 00:07:26.459 "state": "configuring", 00:07:26.459 "raid_level": "concat", 00:07:26.459 "superblock": true, 00:07:26.459 "num_base_bdevs": 2, 00:07:26.459 "num_base_bdevs_discovered": 1, 00:07:26.459 "num_base_bdevs_operational": 2, 00:07:26.459 "base_bdevs_list": [ 00:07:26.459 { 00:07:26.459 "name": "BaseBdev1", 00:07:26.459 "uuid": "5c7ba35a-9f89-4a07-ab13-a22d614f3a32", 00:07:26.459 "is_configured": true, 00:07:26.459 "data_offset": 2048, 00:07:26.459 "data_size": 63488 00:07:26.459 }, 00:07:26.459 { 00:07:26.459 "name": "BaseBdev2", 00:07:26.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.459 
"is_configured": false, 00:07:26.459 "data_offset": 0, 00:07:26.459 "data_size": 0 00:07:26.459 } 00:07:26.459 ] 00:07:26.459 }' 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.459 23:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.029 [2024-12-06 23:41:38.366818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.029 [2024-12-06 23:41:38.366935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.029 [2024-12-06 23:41:38.378859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.029 [2024-12-06 23:41:38.380729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.029 [2024-12-06 23:41:38.380806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.029 23:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.029 23:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.029 "name": "Existed_Raid", 00:07:27.029 "uuid": "32355959-bc9e-4d9e-8d1d-eb4fb48d1f6b", 00:07:27.029 "strip_size_kb": 64, 00:07:27.029 "state": "configuring", 00:07:27.029 "raid_level": "concat", 00:07:27.029 "superblock": true, 00:07:27.029 "num_base_bdevs": 2, 00:07:27.029 "num_base_bdevs_discovered": 1, 00:07:27.029 "num_base_bdevs_operational": 2, 00:07:27.029 "base_bdevs_list": [ 00:07:27.029 { 00:07:27.029 "name": "BaseBdev1", 00:07:27.029 "uuid": "5c7ba35a-9f89-4a07-ab13-a22d614f3a32", 00:07:27.029 "is_configured": true, 00:07:27.029 "data_offset": 2048, 00:07:27.029 "data_size": 63488 00:07:27.029 }, 00:07:27.029 { 00:07:27.029 "name": "BaseBdev2", 00:07:27.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.029 "is_configured": false, 00:07:27.029 "data_offset": 0, 00:07:27.029 "data_size": 0 00:07:27.029 } 00:07:27.029 ] 00:07:27.029 }' 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.029 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 [2024-12-06 23:41:38.825765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:27.288 [2024-12-06 23:41:38.826069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:27.288 [2024-12-06 23:41:38.826120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:27.288 [2024-12-06 23:41:38.826389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:27.288 BaseBdev2 00:07:27.288 [2024-12-06 23:41:38.826589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:27.288 [2024-12-06 23:41:38.826605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:27.288 [2024-12-06 23:41:38.826763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:27.288 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.288 23:41:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 [ 00:07:27.288 { 00:07:27.288 "name": "BaseBdev2", 00:07:27.288 "aliases": [ 00:07:27.288 "8124f075-2be1-4ac5-a256-fd2935016279" 00:07:27.288 ], 00:07:27.288 "product_name": "Malloc disk", 00:07:27.288 "block_size": 512, 00:07:27.288 "num_blocks": 65536, 00:07:27.288 "uuid": "8124f075-2be1-4ac5-a256-fd2935016279", 00:07:27.288 "assigned_rate_limits": { 00:07:27.288 "rw_ios_per_sec": 0, 00:07:27.548 "rw_mbytes_per_sec": 0, 00:07:27.548 "r_mbytes_per_sec": 0, 00:07:27.548 "w_mbytes_per_sec": 0 00:07:27.548 }, 00:07:27.548 "claimed": true, 00:07:27.548 "claim_type": "exclusive_write", 00:07:27.548 "zoned": false, 00:07:27.548 "supported_io_types": { 00:07:27.548 "read": true, 00:07:27.548 "write": true, 00:07:27.548 "unmap": true, 00:07:27.548 "flush": true, 00:07:27.548 "reset": true, 00:07:27.548 "nvme_admin": false, 00:07:27.548 "nvme_io": false, 00:07:27.548 "nvme_io_md": false, 00:07:27.548 "write_zeroes": true, 00:07:27.548 "zcopy": true, 00:07:27.548 "get_zone_info": false, 00:07:27.548 "zone_management": false, 00:07:27.548 "zone_append": false, 00:07:27.548 "compare": false, 00:07:27.548 "compare_and_write": false, 00:07:27.548 "abort": true, 00:07:27.548 "seek_hole": false, 00:07:27.548 "seek_data": false, 00:07:27.548 "copy": true, 00:07:27.548 "nvme_iov_md": false 00:07:27.548 }, 00:07:27.548 "memory_domains": [ 00:07:27.548 { 00:07:27.548 "dma_device_id": "system", 00:07:27.548 "dma_device_type": 1 00:07:27.548 }, 00:07:27.548 { 00:07:27.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.548 "dma_device_type": 2 00:07:27.548 } 00:07:27.548 ], 00:07:27.548 "driver_specific": {} 00:07:27.548 } 00:07:27.548 ] 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:27.548 23:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.548 23:41:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.548 "name": "Existed_Raid", 00:07:27.548 "uuid": "32355959-bc9e-4d9e-8d1d-eb4fb48d1f6b", 00:07:27.548 "strip_size_kb": 64, 00:07:27.548 "state": "online", 00:07:27.548 "raid_level": "concat", 00:07:27.548 "superblock": true, 00:07:27.548 "num_base_bdevs": 2, 00:07:27.548 "num_base_bdevs_discovered": 2, 00:07:27.548 "num_base_bdevs_operational": 2, 00:07:27.548 "base_bdevs_list": [ 00:07:27.548 { 00:07:27.548 "name": "BaseBdev1", 00:07:27.548 "uuid": "5c7ba35a-9f89-4a07-ab13-a22d614f3a32", 00:07:27.548 "is_configured": true, 00:07:27.548 "data_offset": 2048, 00:07:27.548 "data_size": 63488 00:07:27.548 }, 00:07:27.548 { 00:07:27.548 "name": "BaseBdev2", 00:07:27.548 "uuid": "8124f075-2be1-4ac5-a256-fd2935016279", 00:07:27.548 "is_configured": true, 00:07:27.548 "data_offset": 2048, 00:07:27.548 "data_size": 63488 00:07:27.548 } 00:07:27.548 ] 00:07:27.548 }' 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.548 23:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.808 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.808 [2024-12-06 23:41:39.361202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.068 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.068 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.068 "name": "Existed_Raid", 00:07:28.068 "aliases": [ 00:07:28.068 "32355959-bc9e-4d9e-8d1d-eb4fb48d1f6b" 00:07:28.068 ], 00:07:28.068 "product_name": "Raid Volume", 00:07:28.068 "block_size": 512, 00:07:28.068 "num_blocks": 126976, 00:07:28.068 "uuid": "32355959-bc9e-4d9e-8d1d-eb4fb48d1f6b", 00:07:28.068 "assigned_rate_limits": { 00:07:28.068 "rw_ios_per_sec": 0, 00:07:28.068 "rw_mbytes_per_sec": 0, 00:07:28.068 "r_mbytes_per_sec": 0, 00:07:28.068 "w_mbytes_per_sec": 0 00:07:28.068 }, 00:07:28.068 "claimed": false, 00:07:28.068 "zoned": false, 00:07:28.068 "supported_io_types": { 00:07:28.068 "read": true, 00:07:28.068 "write": true, 00:07:28.068 "unmap": true, 00:07:28.068 "flush": true, 00:07:28.068 "reset": true, 00:07:28.068 "nvme_admin": false, 00:07:28.068 "nvme_io": false, 00:07:28.068 "nvme_io_md": false, 00:07:28.068 "write_zeroes": true, 00:07:28.068 "zcopy": false, 00:07:28.069 "get_zone_info": false, 00:07:28.069 "zone_management": false, 00:07:28.069 "zone_append": false, 00:07:28.069 "compare": false, 00:07:28.069 "compare_and_write": false, 00:07:28.069 "abort": false, 00:07:28.069 "seek_hole": false, 00:07:28.069 "seek_data": false, 00:07:28.069 "copy": false, 00:07:28.069 "nvme_iov_md": false 00:07:28.069 }, 00:07:28.069 "memory_domains": [ 00:07:28.069 { 00:07:28.069 
"dma_device_id": "system", 00:07:28.069 "dma_device_type": 1 00:07:28.069 }, 00:07:28.069 { 00:07:28.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.069 "dma_device_type": 2 00:07:28.069 }, 00:07:28.069 { 00:07:28.069 "dma_device_id": "system", 00:07:28.069 "dma_device_type": 1 00:07:28.069 }, 00:07:28.069 { 00:07:28.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.069 "dma_device_type": 2 00:07:28.069 } 00:07:28.069 ], 00:07:28.069 "driver_specific": { 00:07:28.069 "raid": { 00:07:28.069 "uuid": "32355959-bc9e-4d9e-8d1d-eb4fb48d1f6b", 00:07:28.069 "strip_size_kb": 64, 00:07:28.069 "state": "online", 00:07:28.069 "raid_level": "concat", 00:07:28.069 "superblock": true, 00:07:28.069 "num_base_bdevs": 2, 00:07:28.069 "num_base_bdevs_discovered": 2, 00:07:28.069 "num_base_bdevs_operational": 2, 00:07:28.069 "base_bdevs_list": [ 00:07:28.069 { 00:07:28.069 "name": "BaseBdev1", 00:07:28.069 "uuid": "5c7ba35a-9f89-4a07-ab13-a22d614f3a32", 00:07:28.069 "is_configured": true, 00:07:28.069 "data_offset": 2048, 00:07:28.069 "data_size": 63488 00:07:28.069 }, 00:07:28.069 { 00:07:28.069 "name": "BaseBdev2", 00:07:28.069 "uuid": "8124f075-2be1-4ac5-a256-fd2935016279", 00:07:28.069 "is_configured": true, 00:07:28.069 "data_offset": 2048, 00:07:28.069 "data_size": 63488 00:07:28.069 } 00:07:28.069 ] 00:07:28.069 } 00:07:28.069 } 00:07:28.069 }' 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:28.069 BaseBdev2' 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.069 23:41:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.069 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.069 [2024-12-06 23:41:39.592569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:28.069 [2024-12-06 23:41:39.592603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.069 [2024-12-06 23:41:39.592655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.329 "name": "Existed_Raid", 00:07:28.329 "uuid": "32355959-bc9e-4d9e-8d1d-eb4fb48d1f6b", 00:07:28.329 "strip_size_kb": 64, 00:07:28.329 "state": "offline", 00:07:28.329 "raid_level": "concat", 00:07:28.329 "superblock": true, 00:07:28.329 "num_base_bdevs": 2, 00:07:28.329 "num_base_bdevs_discovered": 1, 00:07:28.329 "num_base_bdevs_operational": 1, 00:07:28.329 "base_bdevs_list": [ 00:07:28.329 { 00:07:28.329 "name": null, 00:07:28.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.329 "is_configured": false, 00:07:28.329 "data_offset": 0, 00:07:28.329 "data_size": 63488 00:07:28.329 }, 00:07:28.329 { 00:07:28.329 "name": "BaseBdev2", 00:07:28.329 "uuid": "8124f075-2be1-4ac5-a256-fd2935016279", 00:07:28.329 "is_configured": true, 00:07:28.329 "data_offset": 2048, 00:07:28.329 "data_size": 63488 00:07:28.329 } 00:07:28.329 ] 
00:07:28.329 }' 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.329 23:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.589 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.589 [2024-12-06 23:41:40.130813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:28.589 [2024-12-06 23:41:40.130923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.847 23:41:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61876 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61876 ']' 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61876 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61876 00:07:28.847 killing process with pid 61876 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61876' 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61876 00:07:28.847 [2024-12-06 23:41:40.326233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.847 23:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61876 00:07:28.847 [2024-12-06 23:41:40.342907] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.227 23:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:30.227 00:07:30.227 real 0m5.040s 00:07:30.227 user 0m7.256s 00:07:30.227 sys 0m0.842s 00:07:30.227 ************************************ 00:07:30.227 END TEST raid_state_function_test_sb 00:07:30.227 ************************************ 00:07:30.227 23:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.227 23:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.227 23:41:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:30.227 23:41:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:30.227 23:41:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.227 23:41:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.227 ************************************ 00:07:30.227 START TEST raid_superblock_test 00:07:30.227 ************************************ 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62128 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62128 00:07:30.227 23:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62128 ']' 00:07:30.227 
23:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.228 23:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.228 23:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.228 23:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.228 23:41:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.228 [2024-12-06 23:41:41.610898] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:07:30.228 [2024-12-06 23:41:41.611122] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62128 ] 00:07:30.228 [2024-12-06 23:41:41.771232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.486 [2024-12-06 23:41:41.889876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.745 [2024-12-06 23:41:42.090750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.745 [2024-12-06 23:41:42.090856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.005 malloc1 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.005 [2024-12-06 23:41:42.496166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.005 [2024-12-06 23:41:42.496287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.005 [2024-12-06 23:41:42.496329] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:31.005 [2024-12-06 23:41:42.496359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:31.005 [2024-12-06 23:41:42.498708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.005 [2024-12-06 23:41:42.498778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.005 pt1 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.005 malloc2 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.005 [2024-12-06 23:41:42.556734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.005 [2024-12-06 23:41:42.556793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.005 [2024-12-06 23:41:42.556820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:31.005 [2024-12-06 23:41:42.556829] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.005 [2024-12-06 23:41:42.558934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.005 [2024-12-06 23:41:42.559047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.005 pt2 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.005 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.264 [2024-12-06 23:41:42.568772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.264 [2024-12-06 23:41:42.570545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.264 [2024-12-06 23:41:42.570735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:31.264 [2024-12-06 23:41:42.570750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:31.264 [2024-12-06 23:41:42.571015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:31.264 [2024-12-06 23:41:42.571172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:31.264 [2024-12-06 23:41:42.571184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:31.264 [2024-12-06 23:41:42.571342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.264 23:41:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.264 "name": "raid_bdev1", 00:07:31.264 "uuid": "e730e0f7-3abf-4571-af02-c990cc616a1e", 00:07:31.264 "strip_size_kb": 64, 00:07:31.264 "state": "online", 00:07:31.264 "raid_level": "concat", 00:07:31.264 "superblock": true, 00:07:31.264 "num_base_bdevs": 2, 00:07:31.264 "num_base_bdevs_discovered": 2, 00:07:31.264 "num_base_bdevs_operational": 2, 00:07:31.264 "base_bdevs_list": [ 00:07:31.264 { 00:07:31.264 "name": "pt1", 00:07:31.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.264 "is_configured": true, 00:07:31.264 "data_offset": 2048, 00:07:31.264 "data_size": 63488 00:07:31.264 }, 00:07:31.264 { 00:07:31.264 "name": "pt2", 00:07:31.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.264 "is_configured": true, 00:07:31.264 "data_offset": 2048, 00:07:31.264 "data_size": 63488 00:07:31.264 } 00:07:31.264 ] 00:07:31.264 }' 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.264 23:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.524 
23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.524 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.524 [2024-12-06 23:41:43.072180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.785 "name": "raid_bdev1", 00:07:31.785 "aliases": [ 00:07:31.785 "e730e0f7-3abf-4571-af02-c990cc616a1e" 00:07:31.785 ], 00:07:31.785 "product_name": "Raid Volume", 00:07:31.785 "block_size": 512, 00:07:31.785 "num_blocks": 126976, 00:07:31.785 "uuid": "e730e0f7-3abf-4571-af02-c990cc616a1e", 00:07:31.785 "assigned_rate_limits": { 00:07:31.785 "rw_ios_per_sec": 0, 00:07:31.785 "rw_mbytes_per_sec": 0, 00:07:31.785 "r_mbytes_per_sec": 0, 00:07:31.785 "w_mbytes_per_sec": 0 00:07:31.785 }, 00:07:31.785 "claimed": false, 00:07:31.785 "zoned": false, 00:07:31.785 "supported_io_types": { 00:07:31.785 "read": true, 00:07:31.785 "write": true, 00:07:31.785 "unmap": true, 00:07:31.785 "flush": true, 00:07:31.785 "reset": true, 00:07:31.785 "nvme_admin": false, 00:07:31.785 "nvme_io": false, 00:07:31.785 "nvme_io_md": false, 00:07:31.785 "write_zeroes": true, 00:07:31.785 "zcopy": false, 00:07:31.785 "get_zone_info": false, 00:07:31.785 "zone_management": false, 00:07:31.785 "zone_append": false, 00:07:31.785 "compare": false, 00:07:31.785 "compare_and_write": false, 00:07:31.785 "abort": false, 00:07:31.785 "seek_hole": false, 00:07:31.785 
"seek_data": false, 00:07:31.785 "copy": false, 00:07:31.785 "nvme_iov_md": false 00:07:31.785 }, 00:07:31.785 "memory_domains": [ 00:07:31.785 { 00:07:31.785 "dma_device_id": "system", 00:07:31.785 "dma_device_type": 1 00:07:31.785 }, 00:07:31.785 { 00:07:31.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.785 "dma_device_type": 2 00:07:31.785 }, 00:07:31.785 { 00:07:31.785 "dma_device_id": "system", 00:07:31.785 "dma_device_type": 1 00:07:31.785 }, 00:07:31.785 { 00:07:31.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.785 "dma_device_type": 2 00:07:31.785 } 00:07:31.785 ], 00:07:31.785 "driver_specific": { 00:07:31.785 "raid": { 00:07:31.785 "uuid": "e730e0f7-3abf-4571-af02-c990cc616a1e", 00:07:31.785 "strip_size_kb": 64, 00:07:31.785 "state": "online", 00:07:31.785 "raid_level": "concat", 00:07:31.785 "superblock": true, 00:07:31.785 "num_base_bdevs": 2, 00:07:31.785 "num_base_bdevs_discovered": 2, 00:07:31.785 "num_base_bdevs_operational": 2, 00:07:31.785 "base_bdevs_list": [ 00:07:31.785 { 00:07:31.785 "name": "pt1", 00:07:31.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.785 "is_configured": true, 00:07:31.785 "data_offset": 2048, 00:07:31.785 "data_size": 63488 00:07:31.785 }, 00:07:31.785 { 00:07:31.785 "name": "pt2", 00:07:31.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.785 "is_configured": true, 00:07:31.785 "data_offset": 2048, 00:07:31.785 "data_size": 63488 00:07:31.785 } 00:07:31.785 ] 00:07:31.785 } 00:07:31.785 } 00:07:31.785 }' 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:31.785 pt2' 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.785 23:41:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.785 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:31.786 [2024-12-06 23:41:43.295746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e730e0f7-3abf-4571-af02-c990cc616a1e 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e730e0f7-3abf-4571-af02-c990cc616a1e ']' 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.786 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.786 [2024-12-06 23:41:43.343361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.786 [2024-12-06 23:41:43.343427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.786 [2024-12-06 23:41:43.343549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.786 [2024-12-06 23:41:43.343639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.786 [2024-12-06 23:41:43.343724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.046 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.047 [2024-12-06 23:41:43.471175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:32.047 [2024-12-06 23:41:43.473206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:32.047 [2024-12-06 23:41:43.473314] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:32.047 [2024-12-06 23:41:43.473423] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:32.047 [2024-12-06 23:41:43.473472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.047 [2024-12-06 23:41:43.473486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:32.047 request: 00:07:32.047 { 00:07:32.047 "name": "raid_bdev1", 00:07:32.047 "raid_level": "concat", 00:07:32.047 "base_bdevs": [ 00:07:32.047 "malloc1", 00:07:32.047 "malloc2" 00:07:32.047 ], 00:07:32.047 "strip_size_kb": 64, 00:07:32.047 "superblock": false, 00:07:32.047 "method": "bdev_raid_create", 00:07:32.047 "req_id": 1 00:07:32.047 } 00:07:32.047 Got JSON-RPC error response 00:07:32.047 response: 00:07:32.047 { 00:07:32.047 "code": -17, 00:07:32.047 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:32.047 } 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.047 
23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.047 [2024-12-06 23:41:43.539054] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:32.047 [2024-12-06 23:41:43.539182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.047 [2024-12-06 23:41:43.539221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:32.047 [2024-12-06 23:41:43.539265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.047 [2024-12-06 23:41:43.541616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.047 [2024-12-06 23:41:43.541720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.047 [2024-12-06 23:41:43.541842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:32.047 [2024-12-06 23:41:43.541936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.047 pt1 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.047 "name": "raid_bdev1", 00:07:32.047 "uuid": "e730e0f7-3abf-4571-af02-c990cc616a1e", 00:07:32.047 "strip_size_kb": 64, 00:07:32.047 "state": "configuring", 00:07:32.047 "raid_level": "concat", 00:07:32.047 "superblock": true, 00:07:32.047 "num_base_bdevs": 2, 00:07:32.047 "num_base_bdevs_discovered": 1, 00:07:32.047 "num_base_bdevs_operational": 2, 00:07:32.047 "base_bdevs_list": [ 00:07:32.047 { 00:07:32.047 "name": "pt1", 00:07:32.047 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:32.047 "is_configured": true, 00:07:32.047 "data_offset": 2048, 00:07:32.047 "data_size": 63488 00:07:32.047 }, 00:07:32.047 { 00:07:32.047 "name": null, 00:07:32.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.047 "is_configured": false, 00:07:32.047 "data_offset": 2048, 00:07:32.047 "data_size": 63488 00:07:32.047 } 00:07:32.047 ] 00:07:32.047 }' 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.047 23:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.616 [2024-12-06 23:41:44.022816] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:32.616 [2024-12-06 23:41:44.022956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.616 [2024-12-06 23:41:44.022983] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:32.616 [2024-12-06 23:41:44.022993] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.616 [2024-12-06 23:41:44.023476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.616 [2024-12-06 23:41:44.023498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:32.616 [2024-12-06 23:41:44.023576] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:32.616 [2024-12-06 23:41:44.023602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:32.616 [2024-12-06 23:41:44.023727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:32.616 [2024-12-06 23:41:44.023740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.616 [2024-12-06 23:41:44.023981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:32.616 [2024-12-06 23:41:44.024129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:32.616 [2024-12-06 23:41:44.024138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:32.616 [2024-12-06 23:41:44.024280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.616 pt2 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.616 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.616 "name": "raid_bdev1", 00:07:32.616 "uuid": "e730e0f7-3abf-4571-af02-c990cc616a1e", 00:07:32.616 "strip_size_kb": 64, 00:07:32.616 "state": "online", 00:07:32.616 "raid_level": "concat", 00:07:32.616 "superblock": true, 00:07:32.616 "num_base_bdevs": 2, 00:07:32.616 "num_base_bdevs_discovered": 2, 00:07:32.616 "num_base_bdevs_operational": 2, 00:07:32.616 "base_bdevs_list": [ 00:07:32.616 { 00:07:32.616 "name": "pt1", 00:07:32.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.616 "is_configured": true, 00:07:32.616 "data_offset": 2048, 00:07:32.616 "data_size": 63488 00:07:32.616 }, 00:07:32.616 { 00:07:32.616 "name": "pt2", 00:07:32.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.616 "is_configured": true, 00:07:32.616 "data_offset": 2048, 00:07:32.616 "data_size": 63488 00:07:32.616 } 00:07:32.617 ] 00:07:32.617 }' 00:07:32.617 23:41:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.617 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 [2024-12-06 23:41:44.479054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.186 "name": "raid_bdev1", 00:07:33.186 "aliases": [ 00:07:33.186 "e730e0f7-3abf-4571-af02-c990cc616a1e" 00:07:33.186 ], 00:07:33.186 "product_name": "Raid Volume", 00:07:33.186 "block_size": 512, 00:07:33.186 "num_blocks": 126976, 00:07:33.186 "uuid": "e730e0f7-3abf-4571-af02-c990cc616a1e", 00:07:33.186 "assigned_rate_limits": { 00:07:33.186 "rw_ios_per_sec": 0, 00:07:33.186 "rw_mbytes_per_sec": 0, 00:07:33.186 
"r_mbytes_per_sec": 0, 00:07:33.186 "w_mbytes_per_sec": 0 00:07:33.186 }, 00:07:33.186 "claimed": false, 00:07:33.186 "zoned": false, 00:07:33.186 "supported_io_types": { 00:07:33.186 "read": true, 00:07:33.186 "write": true, 00:07:33.186 "unmap": true, 00:07:33.186 "flush": true, 00:07:33.186 "reset": true, 00:07:33.186 "nvme_admin": false, 00:07:33.186 "nvme_io": false, 00:07:33.186 "nvme_io_md": false, 00:07:33.186 "write_zeroes": true, 00:07:33.186 "zcopy": false, 00:07:33.186 "get_zone_info": false, 00:07:33.186 "zone_management": false, 00:07:33.186 "zone_append": false, 00:07:33.186 "compare": false, 00:07:33.186 "compare_and_write": false, 00:07:33.186 "abort": false, 00:07:33.186 "seek_hole": false, 00:07:33.186 "seek_data": false, 00:07:33.186 "copy": false, 00:07:33.186 "nvme_iov_md": false 00:07:33.186 }, 00:07:33.186 "memory_domains": [ 00:07:33.186 { 00:07:33.186 "dma_device_id": "system", 00:07:33.186 "dma_device_type": 1 00:07:33.186 }, 00:07:33.186 { 00:07:33.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.186 "dma_device_type": 2 00:07:33.186 }, 00:07:33.186 { 00:07:33.186 "dma_device_id": "system", 00:07:33.186 "dma_device_type": 1 00:07:33.186 }, 00:07:33.186 { 00:07:33.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.186 "dma_device_type": 2 00:07:33.186 } 00:07:33.186 ], 00:07:33.186 "driver_specific": { 00:07:33.186 "raid": { 00:07:33.186 "uuid": "e730e0f7-3abf-4571-af02-c990cc616a1e", 00:07:33.186 "strip_size_kb": 64, 00:07:33.186 "state": "online", 00:07:33.186 "raid_level": "concat", 00:07:33.186 "superblock": true, 00:07:33.186 "num_base_bdevs": 2, 00:07:33.186 "num_base_bdevs_discovered": 2, 00:07:33.186 "num_base_bdevs_operational": 2, 00:07:33.186 "base_bdevs_list": [ 00:07:33.186 { 00:07:33.186 "name": "pt1", 00:07:33.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.186 "is_configured": true, 00:07:33.186 "data_offset": 2048, 00:07:33.186 "data_size": 63488 00:07:33.186 }, 00:07:33.186 { 00:07:33.186 "name": 
"pt2", 00:07:33.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.186 "is_configured": true, 00:07:33.186 "data_offset": 2048, 00:07:33.186 "data_size": 63488 00:07:33.186 } 00:07:33.186 ] 00:07:33.186 } 00:07:33.186 } 00:07:33.186 }' 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:33.186 pt2' 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.186 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.187 23:41:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.187 [2024-12-06 23:41:44.686994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e730e0f7-3abf-4571-af02-c990cc616a1e '!=' e730e0f7-3abf-4571-af02-c990cc616a1e ']' 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62128 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62128 ']' 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62128 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.187 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62128 00:07:33.446 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.446 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.446 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62128' 00:07:33.446 killing process with pid 62128 00:07:33.446 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62128 00:07:33.446 [2024-12-06 23:41:44.770429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.446 [2024-12-06 23:41:44.770511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.447 [2024-12-06 23:41:44.770561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.447 [2024-12-06 23:41:44.770573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:33.447 23:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62128 00:07:33.447 [2024-12-06 23:41:44.975275] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.828 23:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.828 00:07:34.828 real 0m4.538s 00:07:34.828 user 0m6.447s 00:07:34.828 sys 0m0.739s 00:07:34.828 23:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.828 ************************************ 00:07:34.828 END TEST 
raid_superblock_test 00:07:34.828 ************************************ 00:07:34.828 23:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.828 23:41:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:34.828 23:41:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.828 23:41:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.828 23:41:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.828 ************************************ 00:07:34.828 START TEST raid_read_error_test 00:07:34.828 ************************************ 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mRLePJ7a4m 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62340 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62340 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62340 ']' 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.828 23:41:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.828 23:41:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.828 [2024-12-06 23:41:46.229868] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:07:34.828 [2024-12-06 23:41:46.230064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62340 ] 00:07:35.089 [2024-12-06 23:41:46.406088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.089 [2024-12-06 23:41:46.519721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.363 [2024-12-06 23:41:46.718656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.363 [2024-12-06 23:41:46.718811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:35.622 BaseBdev1_malloc 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.622 true 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.622 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.623 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.623 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.623 [2024-12-06 23:41:47.142890] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.623 [2024-12-06 23:41:47.142991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.623 [2024-12-06 23:41:47.143014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.623 [2024-12-06 23:41:47.143025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.623 [2024-12-06 23:41:47.145092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.623 [2024-12-06 23:41:47.145130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.623 BaseBdev1 00:07:35.623 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.623 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.623 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.623 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.623 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.883 BaseBdev2_malloc 00:07:35.883 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.883 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.883 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.883 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.883 true 00:07:35.883 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.883 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.883 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.884 [2024-12-06 23:41:47.210407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.884 [2024-12-06 23:41:47.210500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.884 [2024-12-06 23:41:47.210549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.884 [2024-12-06 23:41:47.210578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.884 [2024-12-06 23:41:47.212585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.884 [2024-12-06 23:41:47.212670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.884 BaseBdev2 00:07:35.884 23:41:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.884 [2024-12-06 23:41:47.222446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.884 [2024-12-06 23:41:47.224258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.884 [2024-12-06 23:41:47.224449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:35.884 [2024-12-06 23:41:47.224463] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.884 [2024-12-06 23:41:47.224691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:35.884 [2024-12-06 23:41:47.224870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:35.884 [2024-12-06 23:41:47.224882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:35.884 [2024-12-06 23:41:47.225042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.884 23:41:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.884 "name": "raid_bdev1", 00:07:35.884 "uuid": "881bb9b7-a547-432c-ae14-4b998297e312", 00:07:35.884 "strip_size_kb": 64, 00:07:35.884 "state": "online", 00:07:35.884 "raid_level": "concat", 00:07:35.884 "superblock": true, 00:07:35.884 "num_base_bdevs": 2, 00:07:35.884 "num_base_bdevs_discovered": 2, 00:07:35.884 "num_base_bdevs_operational": 2, 00:07:35.884 "base_bdevs_list": [ 00:07:35.884 { 00:07:35.884 "name": "BaseBdev1", 00:07:35.884 "uuid": "42113aee-ac1c-58e6-a8f1-25bff77480ce", 00:07:35.884 "is_configured": true, 00:07:35.884 "data_offset": 2048, 00:07:35.884 "data_size": 63488 00:07:35.884 }, 
00:07:35.884 { 00:07:35.884 "name": "BaseBdev2", 00:07:35.884 "uuid": "d7bd9819-f4c1-5238-976f-d19b505312a1", 00:07:35.884 "is_configured": true, 00:07:35.884 "data_offset": 2048, 00:07:35.884 "data_size": 63488 00:07:35.884 } 00:07:35.884 ] 00:07:35.884 }' 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.884 23:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.144 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.144 23:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:36.403 [2024-12-06 23:41:47.738777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.341 23:41:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.341 "name": "raid_bdev1", 00:07:37.341 "uuid": "881bb9b7-a547-432c-ae14-4b998297e312", 00:07:37.341 "strip_size_kb": 64, 00:07:37.341 "state": "online", 00:07:37.341 "raid_level": "concat", 00:07:37.341 "superblock": true, 00:07:37.341 "num_base_bdevs": 2, 00:07:37.341 "num_base_bdevs_discovered": 2, 00:07:37.341 "num_base_bdevs_operational": 2, 00:07:37.341 "base_bdevs_list": [ 00:07:37.341 { 00:07:37.341 "name": "BaseBdev1", 00:07:37.341 "uuid": "42113aee-ac1c-58e6-a8f1-25bff77480ce", 00:07:37.341 "is_configured": true, 00:07:37.341 "data_offset": 2048, 00:07:37.341 "data_size": 63488 00:07:37.341 }, 
00:07:37.341 { 00:07:37.341 "name": "BaseBdev2", 00:07:37.341 "uuid": "d7bd9819-f4c1-5238-976f-d19b505312a1", 00:07:37.341 "is_configured": true, 00:07:37.341 "data_offset": 2048, 00:07:37.341 "data_size": 63488 00:07:37.341 } 00:07:37.341 ] 00:07:37.341 }' 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.341 23:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.602 [2024-12-06 23:41:49.094483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.602 [2024-12-06 23:41:49.094519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.602 [2024-12-06 23:41:49.097251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.602 { 00:07:37.602 "results": [ 00:07:37.602 { 00:07:37.602 "job": "raid_bdev1", 00:07:37.602 "core_mask": "0x1", 00:07:37.602 "workload": "randrw", 00:07:37.602 "percentage": 50, 00:07:37.602 "status": "finished", 00:07:37.602 "queue_depth": 1, 00:07:37.602 "io_size": 131072, 00:07:37.602 "runtime": 1.356678, 00:07:37.602 "iops": 16101.093995774974, 00:07:37.602 "mibps": 2012.6367494718718, 00:07:37.602 "io_failed": 1, 00:07:37.602 "io_timeout": 0, 00:07:37.602 "avg_latency_us": 85.91434827151598, 00:07:37.602 "min_latency_us": 26.047161572052403, 00:07:37.602 "max_latency_us": 1430.9170305676855 00:07:37.602 } 00:07:37.602 ], 00:07:37.602 "core_count": 1 00:07:37.602 } 00:07:37.602 [2024-12-06 23:41:49.097373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.602 [2024-12-06 23:41:49.097418] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.602 [2024-12-06 23:41:49.097433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62340 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62340 ']' 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62340 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62340 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62340' 00:07:37.602 killing process with pid 62340 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62340 00:07:37.602 [2024-12-06 23:41:49.148261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.602 23:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62340 00:07:37.862 [2024-12-06 23:41:49.284126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mRLePJ7a4m 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:39.244 00:07:39.244 real 0m4.320s 00:07:39.244 user 0m5.144s 00:07:39.244 sys 0m0.530s 00:07:39.244 ************************************ 00:07:39.244 END TEST raid_read_error_test 00:07:39.244 ************************************ 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.244 23:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.244 23:41:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:39.244 23:41:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:39.244 23:41:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.244 23:41:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.244 ************************************ 00:07:39.244 START TEST raid_write_error_test 00:07:39.244 ************************************ 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cRsZG9Hmjc 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62480 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62480 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62480 ']' 00:07:39.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.244 23:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.244 [2024-12-06 23:41:50.618242] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:07:39.244 [2024-12-06 23:41:50.618355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62480 ] 00:07:39.244 [2024-12-06 23:41:50.793823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.504 [2024-12-06 23:41:50.909005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.764 [2024-12-06 23:41:51.108499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.764 [2024-12-06 23:41:51.108561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.024 BaseBdev1_malloc 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.024 true 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.024 [2024-12-06 23:41:51.502404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:40.024 [2024-12-06 23:41:51.502460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.024 [2024-12-06 23:41:51.502481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:40.024 [2024-12-06 23:41:51.502492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.024 [2024-12-06 23:41:51.504542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.024 [2024-12-06 23:41:51.504583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:40.024 BaseBdev1 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.024 BaseBdev2_malloc 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:40.024 23:41:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.024 true 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.024 [2024-12-06 23:41:51.568005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:40.024 [2024-12-06 23:41:51.568056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.024 [2024-12-06 23:41:51.568074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:40.024 [2024-12-06 23:41:51.568084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.024 [2024-12-06 23:41:51.570153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.024 [2024-12-06 23:41:51.570259] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:40.024 BaseBdev2 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.024 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.024 [2024-12-06 23:41:51.580048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:40.024 [2024-12-06 23:41:51.581847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.024 [2024-12-06 23:41:51.582039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:40.024 [2024-12-06 23:41:51.582055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.024 [2024-12-06 23:41:51.582287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:40.024 [2024-12-06 23:41:51.582453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:40.024 [2024-12-06 23:41:51.582465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:40.024 [2024-12-06 23:41:51.582603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.285 23:41:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.285 "name": "raid_bdev1", 00:07:40.285 "uuid": "ff561c74-ac18-4d76-b335-f85a563f79ce", 00:07:40.285 "strip_size_kb": 64, 00:07:40.285 "state": "online", 00:07:40.285 "raid_level": "concat", 00:07:40.285 "superblock": true, 00:07:40.285 "num_base_bdevs": 2, 00:07:40.285 "num_base_bdevs_discovered": 2, 00:07:40.285 "num_base_bdevs_operational": 2, 00:07:40.285 "base_bdevs_list": [ 00:07:40.285 { 00:07:40.285 "name": "BaseBdev1", 00:07:40.285 "uuid": "8bdd51aa-a131-59bd-90c8-eda4ecfa98af", 00:07:40.285 "is_configured": true, 00:07:40.285 "data_offset": 2048, 00:07:40.285 "data_size": 63488 00:07:40.285 }, 00:07:40.285 { 00:07:40.285 "name": "BaseBdev2", 00:07:40.285 "uuid": "14ff0bd9-e2b4-5ecc-bae9-dd2548ae5953", 00:07:40.285 "is_configured": true, 00:07:40.285 "data_offset": 2048, 00:07:40.285 "data_size": 63488 00:07:40.285 } 00:07:40.285 ] 00:07:40.285 }' 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.285 23:41:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.545 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.545 23:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:40.545 [2024-12-06 23:41:52.060701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:41.485 23:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:41.485 23:41:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.485 23:41:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.485 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.745 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.745 "name": "raid_bdev1", 00:07:41.745 "uuid": "ff561c74-ac18-4d76-b335-f85a563f79ce", 00:07:41.745 "strip_size_kb": 64, 00:07:41.745 "state": "online", 00:07:41.745 "raid_level": "concat", 00:07:41.745 "superblock": true, 00:07:41.745 "num_base_bdevs": 2, 00:07:41.745 "num_base_bdevs_discovered": 2, 00:07:41.746 "num_base_bdevs_operational": 2, 00:07:41.746 "base_bdevs_list": [ 00:07:41.746 { 00:07:41.746 "name": "BaseBdev1", 00:07:41.746 "uuid": "8bdd51aa-a131-59bd-90c8-eda4ecfa98af", 00:07:41.746 "is_configured": true, 00:07:41.746 "data_offset": 2048, 00:07:41.746 "data_size": 63488 00:07:41.746 }, 00:07:41.746 { 00:07:41.746 "name": "BaseBdev2", 00:07:41.746 "uuid": "14ff0bd9-e2b4-5ecc-bae9-dd2548ae5953", 00:07:41.746 "is_configured": true, 00:07:41.746 "data_offset": 2048, 00:07:41.746 "data_size": 63488 00:07:41.746 } 00:07:41.746 ] 00:07:41.746 }' 00:07:41.746 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.746 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.006 23:41:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:42.006 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.006 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.006 [2024-12-06 23:41:53.483468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:42.006 [2024-12-06 23:41:53.483593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.006 [2024-12-06 23:41:53.486579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.006 [2024-12-06 23:41:53.486619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.006 [2024-12-06 23:41:53.486649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.006 [2024-12-06 23:41:53.486720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:42.006 { 00:07:42.006 "results": [ 00:07:42.006 { 00:07:42.006 "job": "raid_bdev1", 00:07:42.006 "core_mask": "0x1", 00:07:42.006 "workload": "randrw", 00:07:42.006 "percentage": 50, 00:07:42.006 "status": "finished", 00:07:42.006 "queue_depth": 1, 00:07:42.006 "io_size": 131072, 00:07:42.006 "runtime": 1.423853, 00:07:42.006 "iops": 16186.361934834566, 00:07:42.006 "mibps": 2023.2952418543207, 00:07:42.006 "io_failed": 1, 00:07:42.006 "io_timeout": 0, 00:07:42.006 "avg_latency_us": 85.3511526353204, 00:07:42.006 "min_latency_us": 25.6, 00:07:42.006 "max_latency_us": 1430.9170305676855 00:07:42.006 } 00:07:42.006 ], 00:07:42.006 "core_count": 1 00:07:42.006 } 00:07:42.006 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.006 23:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62480 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 62480 ']' 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62480 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62480 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62480' 00:07:42.007 killing process with pid 62480 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62480 00:07:42.007 [2024-12-06 23:41:53.539933] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.007 23:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62480 00:07:42.267 [2024-12-06 23:41:53.670523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cRsZG9Hmjc 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:43.653 ************************************ 00:07:43.653 END TEST raid_write_error_test 00:07:43.653 ************************************ 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:43.653 00:07:43.653 real 0m4.300s 00:07:43.653 user 0m5.121s 00:07:43.653 sys 0m0.553s 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.653 23:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.653 23:41:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:43.653 23:41:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:43.653 23:41:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.653 23:41:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.653 23:41:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.653 ************************************ 00:07:43.653 START TEST raid_state_function_test 00:07:43.653 ************************************ 00:07:43.653 23:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:43.653 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:43.653 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:43.653 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:43.653 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.653 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.653 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:43.653 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62622 00:07:43.654 Process raid pid: 62622 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62622' 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62622 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62622 ']' 00:07:43.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.654 23:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.654 [2024-12-06 23:41:54.976682] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:07:43.654 [2024-12-06 23:41:54.976907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:43.654 [2024-12-06 23:41:55.148964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.915 [2024-12-06 23:41:55.262507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.915 [2024-12-06 23:41:55.465424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:43.915 [2024-12-06 23:41:55.465554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:44.482 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:44.482 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:44.482 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:44.482 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.482 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.482 [2024-12-06 23:41:55.815012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:44.482 [2024-12-06 23:41:55.815140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:44.482 [2024-12-06 23:41:55.815155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:44.482 [2024-12-06 23:41:55.815165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:44.482 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.482 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:44.482 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:44.483 "name": "Existed_Raid",
00:07:44.483 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:44.483 "strip_size_kb": 0,
00:07:44.483 "state": "configuring",
00:07:44.483 "raid_level": "raid1",
00:07:44.483 "superblock": false,
00:07:44.483 "num_base_bdevs": 2,
00:07:44.483 "num_base_bdevs_discovered": 0,
00:07:44.483 "num_base_bdevs_operational": 2,
00:07:44.483 "base_bdevs_list": [
00:07:44.483 {
00:07:44.483 "name": "BaseBdev1",
00:07:44.483 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:44.483 "is_configured": false,
00:07:44.483 "data_offset": 0,
00:07:44.483 "data_size": 0
00:07:44.483 },
00:07:44.483 {
00:07:44.483 "name": "BaseBdev2",
00:07:44.483 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:44.483 "is_configured": false,
00:07:44.483 "data_offset": 0,
00:07:44.483 "data_size": 0
00:07:44.483 }
00:07:44.483 ]
00:07:44.483 }'
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:44.483 23:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.741 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:44.741 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.741 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.741 [2024-12-06 23:41:56.286850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:44.741 [2024-12-06 23:41:56.286948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:44.741 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.741 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:44.741 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.741 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:44.741 [2024-12-06 23:41:56.298798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:44.741 [2024-12-06 23:41:56.298878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:44.741 [2024-12-06 23:41:56.298905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:44.741 [2024-12-06 23:41:56.298929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.001 [2024-12-06 23:41:56.346395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:45.001 BaseBdev1
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.001 [
00:07:45.001 {
00:07:45.001 "name": "BaseBdev1",
00:07:45.001 "aliases": [
00:07:45.001 "312185d7-960a-4dbd-9009-8045f269e69f"
00:07:45.001 ],
00:07:45.001 "product_name": "Malloc disk",
00:07:45.001 "block_size": 512,
00:07:45.001 "num_blocks": 65536,
00:07:45.001 "uuid": "312185d7-960a-4dbd-9009-8045f269e69f",
00:07:45.001 "assigned_rate_limits": {
00:07:45.001 "rw_ios_per_sec": 0,
00:07:45.001 "rw_mbytes_per_sec": 0,
00:07:45.001 "r_mbytes_per_sec": 0,
00:07:45.001 "w_mbytes_per_sec": 0
00:07:45.001 },
00:07:45.001 "claimed": true,
00:07:45.001 "claim_type": "exclusive_write",
00:07:45.001 "zoned": false,
00:07:45.001 "supported_io_types": {
00:07:45.001 "read": true,
00:07:45.001 "write": true,
00:07:45.001 "unmap": true,
00:07:45.001 "flush": true,
00:07:45.001 "reset": true,
00:07:45.001 "nvme_admin": false,
00:07:45.001 "nvme_io": false,
00:07:45.001 "nvme_io_md": false,
00:07:45.001 "write_zeroes": true,
00:07:45.001 "zcopy": true,
00:07:45.001 "get_zone_info": false,
00:07:45.001 "zone_management": false,
00:07:45.001 "zone_append": false,
00:07:45.001 "compare": false,
00:07:45.001 "compare_and_write": false,
00:07:45.001 "abort": true,
00:07:45.001 "seek_hole": false,
00:07:45.001 "seek_data": false,
00:07:45.001 "copy": true,
00:07:45.001 "nvme_iov_md": false
00:07:45.001 },
00:07:45.001 "memory_domains": [
00:07:45.001 {
00:07:45.001 "dma_device_id": "system",
00:07:45.001 "dma_device_type": 1
00:07:45.001 },
00:07:45.001 {
00:07:45.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:45.001 "dma_device_type": 2
00:07:45.001 }
00:07:45.001 ],
00:07:45.001 "driver_specific": {}
00:07:45.001 }
00:07:45.001 ]
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.001 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:45.001 "name": "Existed_Raid",
00:07:45.001 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:45.001 "strip_size_kb": 0,
00:07:45.001 "state": "configuring",
00:07:45.001 "raid_level": "raid1",
00:07:45.001 "superblock": false,
00:07:45.001 "num_base_bdevs": 2,
00:07:45.001 "num_base_bdevs_discovered": 1,
00:07:45.002 "num_base_bdevs_operational": 2,
00:07:45.002 "base_bdevs_list": [
00:07:45.002 {
00:07:45.002 "name": "BaseBdev1",
00:07:45.002 "uuid": "312185d7-960a-4dbd-9009-8045f269e69f",
00:07:45.002 "is_configured": true,
00:07:45.002 "data_offset": 0,
00:07:45.002 "data_size": 65536
00:07:45.002 },
00:07:45.002 {
00:07:45.002 "name": "BaseBdev2",
00:07:45.002 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:45.002 "is_configured": false,
00:07:45.002 "data_offset": 0,
00:07:45.002 "data_size": 0
00:07:45.002 }
00:07:45.002 ]
00:07:45.002 }'
00:07:45.002 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:45.002 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.261 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:45.261 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.261 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.261 [2024-12-06 23:41:56.817656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:45.261 [2024-12-06 23:41:56.817794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.519 [2024-12-06 23:41:56.825690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:45.519 [2024-12-06 23:41:56.827627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:45.519 [2024-12-06 23:41:56.827689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:45.519 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:45.520 "name": "Existed_Raid",
00:07:45.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:45.520 "strip_size_kb": 0,
00:07:45.520 "state": "configuring",
00:07:45.520 "raid_level": "raid1",
00:07:45.520 "superblock": false,
00:07:45.520 "num_base_bdevs": 2,
00:07:45.520 "num_base_bdevs_discovered": 1,
00:07:45.520 "num_base_bdevs_operational": 2,
00:07:45.520 "base_bdevs_list": [
00:07:45.520 {
00:07:45.520 "name": "BaseBdev1",
00:07:45.520 "uuid": "312185d7-960a-4dbd-9009-8045f269e69f",
00:07:45.520 "is_configured": true,
00:07:45.520 "data_offset": 0,
00:07:45.520 "data_size": 65536
00:07:45.520 },
00:07:45.520 {
00:07:45.520 "name": "BaseBdev2",
00:07:45.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:45.520 "is_configured": false,
00:07:45.520 "data_offset": 0,
00:07:45.520 "data_size": 0
00:07:45.520 }
00:07:45.520 ]
00:07:45.520 }'
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:45.520 23:41:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:45.779 [2024-12-06 23:41:57.335930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:45.779 [2024-12-06 23:41:57.336051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:45.779 [2024-12-06 23:41:57.336076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:07:45.779 [2024-12-06 23:41:57.336372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:45.779 [2024-12-06 23:41:57.336583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:45.779 [2024-12-06 23:41:57.336630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:45.779 [2024-12-06 23:41:57.336947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:45.779 BaseBdev2
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:45.779 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.038 [
00:07:46.038 {
00:07:46.038 "name": "BaseBdev2",
00:07:46.038 "aliases": [
00:07:46.038 "22fead5f-1cdb-407e-9733-459f810b01f8"
00:07:46.038 ],
00:07:46.038 "product_name": "Malloc disk",
00:07:46.038 "block_size": 512,
00:07:46.038 "num_blocks": 65536,
00:07:46.038 "uuid": "22fead5f-1cdb-407e-9733-459f810b01f8",
00:07:46.038 "assigned_rate_limits": {
00:07:46.038 "rw_ios_per_sec": 0,
00:07:46.038 "rw_mbytes_per_sec": 0,
00:07:46.038 "r_mbytes_per_sec": 0,
00:07:46.038 "w_mbytes_per_sec": 0
00:07:46.038 },
00:07:46.038 "claimed": true,
00:07:46.038 "claim_type": "exclusive_write",
00:07:46.038 "zoned": false,
00:07:46.038 "supported_io_types": {
00:07:46.038 "read": true,
00:07:46.038 "write": true,
00:07:46.038 "unmap": true,
00:07:46.038 "flush": true,
00:07:46.038 "reset": true,
00:07:46.038 "nvme_admin": false,
00:07:46.038 "nvme_io": false,
00:07:46.038 "nvme_io_md": false,
00:07:46.038 "write_zeroes": true,
00:07:46.038 "zcopy": true,
00:07:46.038 "get_zone_info": false,
00:07:46.038 "zone_management": false,
00:07:46.038 "zone_append": false,
00:07:46.038 "compare": false,
00:07:46.038 "compare_and_write": false,
00:07:46.038 "abort": true,
00:07:46.038 "seek_hole": false,
00:07:46.038 "seek_data": false,
00:07:46.038 "copy": true,
00:07:46.038 "nvme_iov_md": false
00:07:46.038 },
00:07:46.038 "memory_domains": [
00:07:46.038 {
00:07:46.038 "dma_device_id": "system",
00:07:46.038 "dma_device_type": 1
00:07:46.038 },
00:07:46.038 {
00:07:46.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:46.038 "dma_device_type": 2
00:07:46.038 }
00:07:46.038 ],
00:07:46.038 "driver_specific": {}
00:07:46.038 }
00:07:46.038 ]
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:46.038 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:46.039 "name": "Existed_Raid",
00:07:46.039 "uuid": "eaa3d2f9-230b-4b16-8f60-3d091815df8b",
00:07:46.039 "strip_size_kb": 0,
00:07:46.039 "state": "online",
00:07:46.039 "raid_level": "raid1",
00:07:46.039 "superblock": false,
00:07:46.039 "num_base_bdevs": 2,
00:07:46.039 "num_base_bdevs_discovered": 2,
00:07:46.039 "num_base_bdevs_operational": 2,
00:07:46.039 "base_bdevs_list": [
00:07:46.039 {
00:07:46.039 "name": "BaseBdev1",
00:07:46.039 "uuid": "312185d7-960a-4dbd-9009-8045f269e69f",
00:07:46.039 "is_configured": true,
00:07:46.039 "data_offset": 0,
00:07:46.039 "data_size": 65536
00:07:46.039 },
00:07:46.039 {
00:07:46.039 "name": "BaseBdev2",
00:07:46.039 "uuid": "22fead5f-1cdb-407e-9733-459f810b01f8",
00:07:46.039 "is_configured": true,
00:07:46.039 "data_offset": 0,
00:07:46.039 "data_size": 65536
00:07:46.039 }
00:07:46.039 ]
00:07:46.039 }'
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:46.039 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.297 [2024-12-06 23:41:57.799480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.297 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:46.297 "name": "Existed_Raid",
00:07:46.297 "aliases": [
00:07:46.297 "eaa3d2f9-230b-4b16-8f60-3d091815df8b"
00:07:46.297 ],
00:07:46.297 "product_name": "Raid Volume",
00:07:46.297 "block_size": 512,
00:07:46.297 "num_blocks": 65536,
00:07:46.297 "uuid": "eaa3d2f9-230b-4b16-8f60-3d091815df8b",
00:07:46.297 "assigned_rate_limits": {
00:07:46.297 "rw_ios_per_sec": 0,
00:07:46.297 "rw_mbytes_per_sec": 0,
00:07:46.297 "r_mbytes_per_sec": 0,
00:07:46.297 "w_mbytes_per_sec": 0
00:07:46.297 },
00:07:46.297 "claimed": false,
00:07:46.297 "zoned": false,
00:07:46.297 "supported_io_types": {
00:07:46.297 "read": true,
00:07:46.297 "write": true,
00:07:46.297 "unmap": false,
00:07:46.297 "flush": false,
00:07:46.297 "reset": true,
00:07:46.297 "nvme_admin": false,
00:07:46.297 "nvme_io": false,
00:07:46.297 "nvme_io_md": false,
00:07:46.297 "write_zeroes": true,
00:07:46.297 "zcopy": false,
00:07:46.297 "get_zone_info": false,
00:07:46.297 "zone_management": false,
00:07:46.297 "zone_append": false,
00:07:46.297 "compare": false,
00:07:46.297 "compare_and_write": false,
00:07:46.297 "abort": false,
00:07:46.297 "seek_hole": false,
00:07:46.297 "seek_data": false,
00:07:46.297 "copy": false,
00:07:46.297 "nvme_iov_md": false
00:07:46.297 },
00:07:46.297 "memory_domains": [
00:07:46.297 {
00:07:46.297 "dma_device_id": "system",
00:07:46.297 "dma_device_type": 1
00:07:46.297 },
00:07:46.297 {
00:07:46.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:46.297 "dma_device_type": 2
00:07:46.297 },
00:07:46.297 {
00:07:46.297 "dma_device_id": "system",
00:07:46.297 "dma_device_type": 1
00:07:46.297 },
00:07:46.297 {
00:07:46.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:46.297 "dma_device_type": 2
00:07:46.297 }
00:07:46.297 ],
00:07:46.297 "driver_specific": {
00:07:46.297 "raid": {
00:07:46.297 "uuid": "eaa3d2f9-230b-4b16-8f60-3d091815df8b",
00:07:46.297 "strip_size_kb": 0,
00:07:46.297 "state": "online",
00:07:46.297 "raid_level": "raid1",
00:07:46.297 "superblock": false,
00:07:46.298 "num_base_bdevs": 2,
00:07:46.298 "num_base_bdevs_discovered": 2,
00:07:46.298 "num_base_bdevs_operational": 2,
00:07:46.298 "base_bdevs_list": [
00:07:46.298 {
00:07:46.298 "name": "BaseBdev1",
00:07:46.298 "uuid": "312185d7-960a-4dbd-9009-8045f269e69f",
00:07:46.298 "is_configured": true,
00:07:46.298 "data_offset": 0,
00:07:46.298 "data_size": 65536
00:07:46.298 },
00:07:46.298 {
00:07:46.298 "name": "BaseBdev2",
00:07:46.298 "uuid": "22fead5f-1cdb-407e-9733-459f810b01f8",
00:07:46.298 "is_configured": true,
00:07:46.298 "data_offset": 0,
00:07:46.298 "data_size": 65536
00:07:46.298 }
00:07:46.298 ]
00:07:46.298 }
00:07:46.298 }
00:07:46.298 }'
00:07:46.298 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:46.557 BaseBdev2'
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:46.557 23:41:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.557 [2024-12-06 23:41:58.010857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.557 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:46.816 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:46.816 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:46.816 "name": "Existed_Raid",
00:07:46.816 "uuid": "eaa3d2f9-230b-4b16-8f60-3d091815df8b",
00:07:46.816 "strip_size_kb": 0,
00:07:46.816 "state": "online",
00:07:46.816 "raid_level": "raid1",
00:07:46.816 "superblock": false,
00:07:46.816 "num_base_bdevs": 2,
00:07:46.816 "num_base_bdevs_discovered": 1,
00:07:46.816 "num_base_bdevs_operational": 1,
00:07:46.816 "base_bdevs_list": [
00:07:46.816 {
00:07:46.816 "name": null,
00:07:46.816 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:46.816 "is_configured": false,
00:07:46.816 "data_offset": 0,
00:07:46.816 "data_size": 65536
00:07:46.816 },
00:07:46.816 {
00:07:46.816 "name": "BaseBdev2",
00:07:46.816 "uuid": "22fead5f-1cdb-407e-9733-459f810b01f8",
00:07:46.816 "is_configured": true,
00:07:46.816 "data_offset": 0,
00:07:46.816 "data_size": 65536
00:07:46.816 }
00:07:46.816 ]
00:07:46.816 }'
00:07:46.816 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:46.816 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:47.076 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.076 [2024-12-06 23:41:58.574844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:47.076 [2024-12-06 23:41:58.575004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.334 [2024-12-06 23:41:58.668196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.334 [2024-12-06 23:41:58.668342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.334 [2024-12-06 23:41:58.668384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62622 00:07:47.334 23:41:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62622 ']' 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62622 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62622 00:07:47.334 killing process with pid 62622 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62622' 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62622 00:07:47.334 [2024-12-06 23:41:58.758964] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.334 23:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62622 00:07:47.334 [2024-12-06 23:41:58.774534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.712 00:07:48.712 real 0m4.980s 00:07:48.712 user 0m7.216s 00:07:48.712 sys 0m0.787s 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.712 ************************************ 00:07:48.712 END TEST raid_state_function_test 00:07:48.712 ************************************ 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.712 23:41:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:48.712 23:41:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.712 23:41:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.712 23:41:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.712 ************************************ 00:07:48.712 START TEST raid_state_function_test_sb 00:07:48.712 ************************************ 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62871 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62871' 00:07:48.712 Process raid pid: 62871 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62871 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62871 ']' 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.712 23:41:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.712 23:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.712 [2024-12-06 23:42:00.015584] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:07:48.713 [2024-12-06 23:42:00.015733] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.713 [2024-12-06 23:42:00.193085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.972 [2024-12-06 23:42:00.314488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.972 [2024-12-06 23:42:00.518140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.972 [2024-12-06 23:42:00.518199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.542 [2024-12-06 23:42:00.856441] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.542 [2024-12-06 23:42:00.856500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.542 [2024-12-06 23:42:00.856511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.542 [2024-12-06 23:42:00.856521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.542 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.542 "name": "Existed_Raid", 00:07:49.542 "uuid": "08680a19-0d43-47e6-9a49-974f26aed73e", 00:07:49.542 "strip_size_kb": 0, 00:07:49.542 "state": "configuring", 00:07:49.542 "raid_level": "raid1", 00:07:49.542 "superblock": true, 00:07:49.542 "num_base_bdevs": 2, 00:07:49.542 "num_base_bdevs_discovered": 0, 00:07:49.542 "num_base_bdevs_operational": 2, 00:07:49.542 "base_bdevs_list": [ 00:07:49.542 { 00:07:49.542 "name": "BaseBdev1", 00:07:49.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.542 "is_configured": false, 00:07:49.542 "data_offset": 0, 00:07:49.542 "data_size": 0 00:07:49.542 }, 00:07:49.542 { 00:07:49.542 "name": "BaseBdev2", 00:07:49.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.543 "is_configured": false, 00:07:49.543 "data_offset": 0, 00:07:49.543 "data_size": 0 00:07:49.543 } 00:07:49.543 ] 00:07:49.543 }' 00:07:49.543 23:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.543 23:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.802 [2024-12-06 23:42:01.327592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:49.802 [2024-12-06 23:42:01.327699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.802 [2024-12-06 23:42:01.339553] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.802 [2024-12-06 23:42:01.339637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.802 [2024-12-06 23:42:01.339679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.802 [2024-12-06 23:42:01.339712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.802 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.062 [2024-12-06 23:42:01.385943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.062 BaseBdev1 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.062 [ 00:07:50.062 { 00:07:50.062 "name": "BaseBdev1", 00:07:50.062 "aliases": [ 00:07:50.062 "d64abf3c-7c16-4cd0-8fa7-4fa12c6ecffc" 00:07:50.062 ], 00:07:50.062 "product_name": "Malloc disk", 00:07:50.062 "block_size": 512, 00:07:50.062 "num_blocks": 65536, 00:07:50.062 "uuid": "d64abf3c-7c16-4cd0-8fa7-4fa12c6ecffc", 00:07:50.062 "assigned_rate_limits": { 00:07:50.062 "rw_ios_per_sec": 0, 00:07:50.062 "rw_mbytes_per_sec": 0, 00:07:50.062 "r_mbytes_per_sec": 0, 00:07:50.062 "w_mbytes_per_sec": 0 00:07:50.062 }, 00:07:50.062 "claimed": true, 
00:07:50.062 "claim_type": "exclusive_write", 00:07:50.062 "zoned": false, 00:07:50.062 "supported_io_types": { 00:07:50.062 "read": true, 00:07:50.062 "write": true, 00:07:50.062 "unmap": true, 00:07:50.062 "flush": true, 00:07:50.062 "reset": true, 00:07:50.062 "nvme_admin": false, 00:07:50.062 "nvme_io": false, 00:07:50.062 "nvme_io_md": false, 00:07:50.062 "write_zeroes": true, 00:07:50.062 "zcopy": true, 00:07:50.062 "get_zone_info": false, 00:07:50.062 "zone_management": false, 00:07:50.062 "zone_append": false, 00:07:50.062 "compare": false, 00:07:50.062 "compare_and_write": false, 00:07:50.062 "abort": true, 00:07:50.062 "seek_hole": false, 00:07:50.062 "seek_data": false, 00:07:50.062 "copy": true, 00:07:50.062 "nvme_iov_md": false 00:07:50.062 }, 00:07:50.062 "memory_domains": [ 00:07:50.062 { 00:07:50.062 "dma_device_id": "system", 00:07:50.062 "dma_device_type": 1 00:07:50.062 }, 00:07:50.062 { 00:07:50.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.062 "dma_device_type": 2 00:07:50.062 } 00:07:50.062 ], 00:07:50.062 "driver_specific": {} 00:07:50.062 } 00:07:50.062 ] 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.062 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.063 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.063 "name": "Existed_Raid", 00:07:50.063 "uuid": "a8352191-e9e7-467d-ae9c-dd94e5ce5517", 00:07:50.063 "strip_size_kb": 0, 00:07:50.063 "state": "configuring", 00:07:50.063 "raid_level": "raid1", 00:07:50.063 "superblock": true, 00:07:50.063 "num_base_bdevs": 2, 00:07:50.063 "num_base_bdevs_discovered": 1, 00:07:50.063 "num_base_bdevs_operational": 2, 00:07:50.063 "base_bdevs_list": [ 00:07:50.063 { 00:07:50.063 "name": "BaseBdev1", 00:07:50.063 "uuid": "d64abf3c-7c16-4cd0-8fa7-4fa12c6ecffc", 00:07:50.063 "is_configured": true, 00:07:50.063 "data_offset": 2048, 00:07:50.063 "data_size": 63488 00:07:50.063 }, 00:07:50.063 { 00:07:50.063 "name": "BaseBdev2", 00:07:50.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.063 "is_configured": false, 00:07:50.063 
"data_offset": 0, 00:07:50.063 "data_size": 0 00:07:50.063 } 00:07:50.063 ] 00:07:50.063 }' 00:07:50.063 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.063 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.324 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.324 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.324 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.324 [2024-12-06 23:42:01.873195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.324 [2024-12-06 23:42:01.873256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:50.324 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.324 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.324 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.324 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.587 [2024-12-06 23:42:01.885205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.587 [2024-12-06 23:42:01.887082] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.587 [2024-12-06 23:42:01.887196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.587 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.588 23:42:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.588 "name": "Existed_Raid", 00:07:50.588 "uuid": "1914614a-cc49-49ea-8edd-478269c429f2", 00:07:50.588 "strip_size_kb": 0, 00:07:50.588 "state": "configuring", 00:07:50.588 "raid_level": "raid1", 00:07:50.588 "superblock": true, 00:07:50.588 "num_base_bdevs": 2, 00:07:50.588 "num_base_bdevs_discovered": 1, 00:07:50.588 "num_base_bdevs_operational": 2, 00:07:50.588 "base_bdevs_list": [ 00:07:50.588 { 00:07:50.588 "name": "BaseBdev1", 00:07:50.588 "uuid": "d64abf3c-7c16-4cd0-8fa7-4fa12c6ecffc", 00:07:50.588 "is_configured": true, 00:07:50.588 "data_offset": 2048, 00:07:50.588 "data_size": 63488 00:07:50.588 }, 00:07:50.588 { 00:07:50.588 "name": "BaseBdev2", 00:07:50.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.588 "is_configured": false, 00:07:50.588 "data_offset": 0, 00:07:50.588 "data_size": 0 00:07:50.588 } 00:07:50.588 ] 00:07:50.588 }' 00:07:50.588 23:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.588 23:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.848 [2024-12-06 23:42:02.366199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.848 [2024-12-06 23:42:02.366542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.848 [2024-12-06 23:42:02.366602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.848 [2024-12-06 23:42:02.367017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:50.848 
BaseBdev2 00:07:50.848 [2024-12-06 23:42:02.367256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.848 [2024-12-06 23:42:02.367317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:50.848 [2024-12-06 23:42:02.367542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.848 [ 00:07:50.848 { 00:07:50.848 "name": "BaseBdev2", 00:07:50.848 "aliases": [ 00:07:50.848 "2df7ddcf-46f9-4e3e-a4fc-802c35ab0f87" 00:07:50.848 ], 00:07:50.848 "product_name": "Malloc disk", 00:07:50.848 "block_size": 512, 00:07:50.848 "num_blocks": 65536, 00:07:50.848 "uuid": "2df7ddcf-46f9-4e3e-a4fc-802c35ab0f87", 00:07:50.848 "assigned_rate_limits": { 00:07:50.848 "rw_ios_per_sec": 0, 00:07:50.848 "rw_mbytes_per_sec": 0, 00:07:50.848 "r_mbytes_per_sec": 0, 00:07:50.848 "w_mbytes_per_sec": 0 00:07:50.848 }, 00:07:50.848 "claimed": true, 00:07:50.848 "claim_type": "exclusive_write", 00:07:50.848 "zoned": false, 00:07:50.848 "supported_io_types": { 00:07:50.848 "read": true, 00:07:50.848 "write": true, 00:07:50.848 "unmap": true, 00:07:50.848 "flush": true, 00:07:50.848 "reset": true, 00:07:50.848 "nvme_admin": false, 00:07:50.848 "nvme_io": false, 00:07:50.848 "nvme_io_md": false, 00:07:50.848 "write_zeroes": true, 00:07:50.848 "zcopy": true, 00:07:50.848 "get_zone_info": false, 00:07:50.848 "zone_management": false, 00:07:50.848 "zone_append": false, 00:07:50.848 "compare": false, 00:07:50.848 "compare_and_write": false, 00:07:50.848 "abort": true, 00:07:50.848 "seek_hole": false, 00:07:50.848 "seek_data": false, 00:07:50.848 "copy": true, 00:07:50.848 "nvme_iov_md": false 00:07:50.848 }, 00:07:50.848 "memory_domains": [ 00:07:50.848 { 00:07:50.848 "dma_device_id": "system", 00:07:50.848 "dma_device_type": 1 00:07:50.848 }, 00:07:50.848 { 00:07:50.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.848 "dma_device_type": 2 00:07:50.848 } 00:07:50.848 ], 00:07:50.848 "driver_specific": {} 00:07:50.848 } 00:07:50.848 ] 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.848 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:51.108 "name": "Existed_Raid", 00:07:51.108 "uuid": "1914614a-cc49-49ea-8edd-478269c429f2", 00:07:51.108 "strip_size_kb": 0, 00:07:51.108 "state": "online", 00:07:51.108 "raid_level": "raid1", 00:07:51.108 "superblock": true, 00:07:51.108 "num_base_bdevs": 2, 00:07:51.108 "num_base_bdevs_discovered": 2, 00:07:51.108 "num_base_bdevs_operational": 2, 00:07:51.108 "base_bdevs_list": [ 00:07:51.108 { 00:07:51.108 "name": "BaseBdev1", 00:07:51.108 "uuid": "d64abf3c-7c16-4cd0-8fa7-4fa12c6ecffc", 00:07:51.108 "is_configured": true, 00:07:51.108 "data_offset": 2048, 00:07:51.108 "data_size": 63488 00:07:51.108 }, 00:07:51.108 { 00:07:51.108 "name": "BaseBdev2", 00:07:51.108 "uuid": "2df7ddcf-46f9-4e3e-a4fc-802c35ab0f87", 00:07:51.108 "is_configured": true, 00:07:51.108 "data_offset": 2048, 00:07:51.108 "data_size": 63488 00:07:51.108 } 00:07:51.108 ] 00:07:51.108 }' 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.108 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:51.368 23:42:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.368 [2024-12-06 23:42:02.873666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.368 "name": "Existed_Raid", 00:07:51.368 "aliases": [ 00:07:51.368 "1914614a-cc49-49ea-8edd-478269c429f2" 00:07:51.368 ], 00:07:51.368 "product_name": "Raid Volume", 00:07:51.368 "block_size": 512, 00:07:51.368 "num_blocks": 63488, 00:07:51.368 "uuid": "1914614a-cc49-49ea-8edd-478269c429f2", 00:07:51.368 "assigned_rate_limits": { 00:07:51.368 "rw_ios_per_sec": 0, 00:07:51.368 "rw_mbytes_per_sec": 0, 00:07:51.368 "r_mbytes_per_sec": 0, 00:07:51.368 "w_mbytes_per_sec": 0 00:07:51.368 }, 00:07:51.368 "claimed": false, 00:07:51.368 "zoned": false, 00:07:51.368 "supported_io_types": { 00:07:51.368 "read": true, 00:07:51.368 "write": true, 00:07:51.368 "unmap": false, 00:07:51.368 "flush": false, 00:07:51.368 "reset": true, 00:07:51.368 "nvme_admin": false, 00:07:51.368 "nvme_io": false, 00:07:51.368 "nvme_io_md": false, 00:07:51.368 "write_zeroes": true, 00:07:51.368 "zcopy": false, 00:07:51.368 "get_zone_info": false, 00:07:51.368 "zone_management": false, 00:07:51.368 "zone_append": false, 00:07:51.368 "compare": false, 00:07:51.368 "compare_and_write": false, 00:07:51.368 "abort": false, 00:07:51.368 "seek_hole": false, 00:07:51.368 "seek_data": false, 00:07:51.368 "copy": false, 00:07:51.368 "nvme_iov_md": false 00:07:51.368 }, 00:07:51.368 "memory_domains": [ 00:07:51.368 { 00:07:51.368 "dma_device_id": "system", 00:07:51.368 
"dma_device_type": 1 00:07:51.368 }, 00:07:51.368 { 00:07:51.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.368 "dma_device_type": 2 00:07:51.368 }, 00:07:51.368 { 00:07:51.368 "dma_device_id": "system", 00:07:51.368 "dma_device_type": 1 00:07:51.368 }, 00:07:51.368 { 00:07:51.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.368 "dma_device_type": 2 00:07:51.368 } 00:07:51.368 ], 00:07:51.368 "driver_specific": { 00:07:51.368 "raid": { 00:07:51.368 "uuid": "1914614a-cc49-49ea-8edd-478269c429f2", 00:07:51.368 "strip_size_kb": 0, 00:07:51.368 "state": "online", 00:07:51.368 "raid_level": "raid1", 00:07:51.368 "superblock": true, 00:07:51.368 "num_base_bdevs": 2, 00:07:51.368 "num_base_bdevs_discovered": 2, 00:07:51.368 "num_base_bdevs_operational": 2, 00:07:51.368 "base_bdevs_list": [ 00:07:51.368 { 00:07:51.368 "name": "BaseBdev1", 00:07:51.368 "uuid": "d64abf3c-7c16-4cd0-8fa7-4fa12c6ecffc", 00:07:51.368 "is_configured": true, 00:07:51.368 "data_offset": 2048, 00:07:51.368 "data_size": 63488 00:07:51.368 }, 00:07:51.368 { 00:07:51.368 "name": "BaseBdev2", 00:07:51.368 "uuid": "2df7ddcf-46f9-4e3e-a4fc-802c35ab0f87", 00:07:51.368 "is_configured": true, 00:07:51.368 "data_offset": 2048, 00:07:51.368 "data_size": 63488 00:07:51.368 } 00:07:51.368 ] 00:07:51.368 } 00:07:51.368 } 00:07:51.368 }' 00:07:51.368 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.628 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:51.628 BaseBdev2' 00:07:51.628 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.628 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.628 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:51.628 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:51.628 23:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.628 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.628 23:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:51.628 23:42:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.628 [2024-12-06 23:42:03.093049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:51.628 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.887 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.887 "name": "Existed_Raid", 00:07:51.887 "uuid": "1914614a-cc49-49ea-8edd-478269c429f2", 00:07:51.887 "strip_size_kb": 0, 00:07:51.887 "state": "online", 00:07:51.887 "raid_level": "raid1", 00:07:51.887 "superblock": true, 00:07:51.887 "num_base_bdevs": 2, 00:07:51.887 "num_base_bdevs_discovered": 1, 00:07:51.887 "num_base_bdevs_operational": 1, 00:07:51.887 "base_bdevs_list": [ 00:07:51.887 { 00:07:51.887 "name": null, 00:07:51.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.887 "is_configured": false, 00:07:51.887 "data_offset": 0, 00:07:51.887 "data_size": 63488 00:07:51.887 }, 00:07:51.887 { 00:07:51.887 "name": "BaseBdev2", 00:07:51.887 "uuid": "2df7ddcf-46f9-4e3e-a4fc-802c35ab0f87", 00:07:51.887 "is_configured": true, 00:07:51.887 "data_offset": 2048, 00:07:51.887 "data_size": 63488 00:07:51.888 } 00:07:51.888 ] 00:07:51.888 }' 00:07:51.888 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.888 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.148 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.148 [2024-12-06 23:42:03.667555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.148 [2024-12-06 23:42:03.667740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.408 [2024-12-06 23:42:03.762390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.408 [2024-12-06 23:42:03.762538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.408 [2024-12-06 23:42:03.762581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62871 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62871 ']' 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62871 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62871 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.408 23:42:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62871' 00:07:52.408 killing process with pid 62871 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62871 00:07:52.408 [2024-12-06 23:42:03.860844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.408 23:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62871 00:07:52.408 [2024-12-06 23:42:03.877761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.788 23:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.788 00:07:53.788 real 0m5.049s 00:07:53.788 user 0m7.317s 00:07:53.788 sys 0m0.815s 00:07:53.788 ************************************ 00:07:53.788 END TEST raid_state_function_test_sb 00:07:53.788 ************************************ 00:07:53.789 23:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.789 23:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.789 23:42:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:53.789 23:42:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:53.789 23:42:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.789 23:42:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.789 ************************************ 00:07:53.789 START TEST raid_superblock_test 00:07:53.789 ************************************ 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63123 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63123 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63123 ']' 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.789 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.789 [2024-12-06 23:42:05.127159] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:07:53.789 [2024-12-06 23:42:05.127366] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63123 ] 00:07:53.789 [2024-12-06 23:42:05.297796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.046 [2024-12-06 23:42:05.413066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.305 [2024-12-06 23:42:05.615161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.305 [2024-12-06 23:42:05.615256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.564 23:42:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.564 malloc1 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.564 23:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.564 [2024-12-06 23:42:06.002651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.564 [2024-12-06 23:42:06.002743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.565 [2024-12-06 23:42:06.002767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:54.565 [2024-12-06 23:42:06.002776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.565 
[2024-12-06 23:42:06.004769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.565 [2024-12-06 23:42:06.004805] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.565 pt1 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.565 malloc2 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.565 23:42:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.565 [2024-12-06 23:42:06.055829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.565 [2024-12-06 23:42:06.055934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.565 [2024-12-06 23:42:06.055977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:54.565 [2024-12-06 23:42:06.056007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.565 [2024-12-06 23:42:06.058005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.565 [2024-12-06 23:42:06.058073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.565 pt2 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.565 [2024-12-06 23:42:06.067858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.565 [2024-12-06 23:42:06.069627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.565 [2024-12-06 23:42:06.069856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:54.565 [2024-12-06 23:42:06.069906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.565 [2024-12-06 
23:42:06.070157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.565 [2024-12-06 23:42:06.070344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:54.565 [2024-12-06 23:42:06.070389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:54.565 [2024-12-06 23:42:06.070570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.565 23:42:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.565 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.823 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.823 "name": "raid_bdev1", 00:07:54.823 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:54.823 "strip_size_kb": 0, 00:07:54.823 "state": "online", 00:07:54.823 "raid_level": "raid1", 00:07:54.823 "superblock": true, 00:07:54.823 "num_base_bdevs": 2, 00:07:54.823 "num_base_bdevs_discovered": 2, 00:07:54.823 "num_base_bdevs_operational": 2, 00:07:54.823 "base_bdevs_list": [ 00:07:54.823 { 00:07:54.823 "name": "pt1", 00:07:54.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.823 "is_configured": true, 00:07:54.823 "data_offset": 2048, 00:07:54.823 "data_size": 63488 00:07:54.823 }, 00:07:54.823 { 00:07:54.823 "name": "pt2", 00:07:54.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.823 "is_configured": true, 00:07:54.823 "data_offset": 2048, 00:07:54.823 "data_size": 63488 00:07:54.823 } 00:07:54.823 ] 00:07:54.823 }' 00:07:54.823 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.823 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.082 
23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.082 [2024-12-06 23:42:06.547346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.082 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.082 "name": "raid_bdev1", 00:07:55.082 "aliases": [ 00:07:55.082 "45081576-98b8-49bb-b8a6-a5c436cf74a5" 00:07:55.082 ], 00:07:55.082 "product_name": "Raid Volume", 00:07:55.082 "block_size": 512, 00:07:55.082 "num_blocks": 63488, 00:07:55.082 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:55.082 "assigned_rate_limits": { 00:07:55.082 "rw_ios_per_sec": 0, 00:07:55.082 "rw_mbytes_per_sec": 0, 00:07:55.082 "r_mbytes_per_sec": 0, 00:07:55.082 "w_mbytes_per_sec": 0 00:07:55.082 }, 00:07:55.082 "claimed": false, 00:07:55.082 "zoned": false, 00:07:55.082 "supported_io_types": { 00:07:55.082 "read": true, 00:07:55.082 "write": true, 00:07:55.082 "unmap": false, 00:07:55.082 "flush": false, 00:07:55.082 "reset": true, 00:07:55.082 "nvme_admin": false, 00:07:55.082 "nvme_io": false, 00:07:55.082 "nvme_io_md": false, 00:07:55.082 "write_zeroes": true, 00:07:55.082 "zcopy": false, 00:07:55.082 "get_zone_info": false, 00:07:55.082 "zone_management": false, 00:07:55.082 "zone_append": false, 00:07:55.082 "compare": false, 00:07:55.083 "compare_and_write": false, 00:07:55.083 "abort": false, 00:07:55.083 "seek_hole": false, 
00:07:55.083 "seek_data": false, 00:07:55.083 "copy": false, 00:07:55.083 "nvme_iov_md": false 00:07:55.083 }, 00:07:55.083 "memory_domains": [ 00:07:55.083 { 00:07:55.083 "dma_device_id": "system", 00:07:55.083 "dma_device_type": 1 00:07:55.083 }, 00:07:55.083 { 00:07:55.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.083 "dma_device_type": 2 00:07:55.083 }, 00:07:55.083 { 00:07:55.083 "dma_device_id": "system", 00:07:55.083 "dma_device_type": 1 00:07:55.083 }, 00:07:55.083 { 00:07:55.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.083 "dma_device_type": 2 00:07:55.083 } 00:07:55.083 ], 00:07:55.083 "driver_specific": { 00:07:55.083 "raid": { 00:07:55.083 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:55.083 "strip_size_kb": 0, 00:07:55.083 "state": "online", 00:07:55.083 "raid_level": "raid1", 00:07:55.083 "superblock": true, 00:07:55.083 "num_base_bdevs": 2, 00:07:55.083 "num_base_bdevs_discovered": 2, 00:07:55.083 "num_base_bdevs_operational": 2, 00:07:55.083 "base_bdevs_list": [ 00:07:55.083 { 00:07:55.083 "name": "pt1", 00:07:55.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.083 "is_configured": true, 00:07:55.083 "data_offset": 2048, 00:07:55.083 "data_size": 63488 00:07:55.083 }, 00:07:55.083 { 00:07:55.083 "name": "pt2", 00:07:55.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.083 "is_configured": true, 00:07:55.083 "data_offset": 2048, 00:07:55.083 "data_size": 63488 00:07:55.083 } 00:07:55.083 ] 00:07:55.083 } 00:07:55.083 } 00:07:55.083 }' 00:07:55.083 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.083 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.083 pt2' 00:07:55.083 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.349 23:42:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.349 [2024-12-06 23:42:06.799056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=45081576-98b8-49bb-b8a6-a5c436cf74a5 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 45081576-98b8-49bb-b8a6-a5c436cf74a5 ']' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.349 [2024-12-06 23:42:06.842761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.349 [2024-12-06 23:42:06.842785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.349 [2024-12-06 23:42:06.842866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.349 [2024-12-06 23:42:06.842924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.349 [2024-12-06 23:42:06.842936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:55.349 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.612 [2024-12-06 23:42:06.978798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:55.612 [2024-12-06 23:42:06.980809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:55.612 [2024-12-06 23:42:06.980873] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:55.612 [2024-12-06 23:42:06.980936] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:55.612 [2024-12-06 23:42:06.980949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.612 [2024-12-06 23:42:06.980960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:55.612 request: 00:07:55.612 { 00:07:55.612 "name": "raid_bdev1", 00:07:55.612 "raid_level": "raid1", 00:07:55.612 "base_bdevs": [ 00:07:55.612 "malloc1", 00:07:55.612 "malloc2" 00:07:55.612 ], 00:07:55.612 "superblock": false, 00:07:55.612 "method": "bdev_raid_create", 00:07:55.612 "req_id": 1 00:07:55.612 } 00:07:55.612 Got JSON-RPC error response 00:07:55.612 response: 00:07:55.612 { 00:07:55.612 "code": -17, 00:07:55.612 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:55.612 } 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.612 23:42:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.612 [2024-12-06 23:42:07.038845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.612 [2024-12-06 23:42:07.038918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.612 [2024-12-06 23:42:07.038940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:55.612 [2024-12-06 23:42:07.038952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.612 [2024-12-06 23:42:07.041296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.612 [2024-12-06 23:42:07.041406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.612 [2024-12-06 23:42:07.041504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:55.612 [2024-12-06 23:42:07.041569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.612 pt1 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.612 23:42:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.612 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.612 "name": "raid_bdev1", 00:07:55.612 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:55.612 "strip_size_kb": 0, 00:07:55.612 "state": "configuring", 00:07:55.612 "raid_level": "raid1", 00:07:55.612 "superblock": true, 00:07:55.612 "num_base_bdevs": 2, 00:07:55.612 "num_base_bdevs_discovered": 1, 00:07:55.612 "num_base_bdevs_operational": 2, 00:07:55.612 "base_bdevs_list": [ 00:07:55.612 { 00:07:55.612 "name": "pt1", 00:07:55.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.612 
"is_configured": true, 00:07:55.612 "data_offset": 2048, 00:07:55.612 "data_size": 63488 00:07:55.612 }, 00:07:55.612 { 00:07:55.612 "name": null, 00:07:55.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.613 "is_configured": false, 00:07:55.613 "data_offset": 2048, 00:07:55.613 "data_size": 63488 00:07:55.613 } 00:07:55.613 ] 00:07:55.613 }' 00:07:55.613 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.613 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.182 [2024-12-06 23:42:07.494867] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.182 [2024-12-06 23:42:07.495007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.182 [2024-12-06 23:42:07.495054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:56.182 [2024-12-06 23:42:07.495091] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.182 [2024-12-06 23:42:07.495610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.182 [2024-12-06 23:42:07.495696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.182 [2024-12-06 23:42:07.495824] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:56.182 [2024-12-06 23:42:07.495887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.182 [2024-12-06 23:42:07.496045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.182 [2024-12-06 23:42:07.496089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.182 [2024-12-06 23:42:07.496374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:56.182 [2024-12-06 23:42:07.496578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.182 [2024-12-06 23:42:07.496620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:56.182 [2024-12-06 23:42:07.496830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.182 pt2 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.182 
23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.182 "name": "raid_bdev1", 00:07:56.182 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:56.182 "strip_size_kb": 0, 00:07:56.182 "state": "online", 00:07:56.182 "raid_level": "raid1", 00:07:56.182 "superblock": true, 00:07:56.182 "num_base_bdevs": 2, 00:07:56.182 "num_base_bdevs_discovered": 2, 00:07:56.182 "num_base_bdevs_operational": 2, 00:07:56.182 "base_bdevs_list": [ 00:07:56.182 { 00:07:56.182 "name": "pt1", 00:07:56.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.182 "is_configured": true, 00:07:56.182 "data_offset": 2048, 00:07:56.182 "data_size": 63488 00:07:56.182 }, 00:07:56.182 { 00:07:56.182 "name": "pt2", 00:07:56.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.182 "is_configured": true, 00:07:56.182 "data_offset": 2048, 00:07:56.182 "data_size": 63488 00:07:56.182 } 00:07:56.182 ] 00:07:56.182 }' 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:56.182 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.442 23:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.442 [2024-12-06 23:42:07.991040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.442 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.702 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.702 "name": "raid_bdev1", 00:07:56.702 "aliases": [ 00:07:56.702 "45081576-98b8-49bb-b8a6-a5c436cf74a5" 00:07:56.702 ], 00:07:56.702 "product_name": "Raid Volume", 00:07:56.702 "block_size": 512, 00:07:56.702 "num_blocks": 63488, 00:07:56.702 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:56.702 "assigned_rate_limits": { 00:07:56.702 "rw_ios_per_sec": 0, 00:07:56.702 "rw_mbytes_per_sec": 0, 00:07:56.702 "r_mbytes_per_sec": 0, 00:07:56.702 "w_mbytes_per_sec": 0 
00:07:56.702 }, 00:07:56.702 "claimed": false, 00:07:56.702 "zoned": false, 00:07:56.702 "supported_io_types": { 00:07:56.702 "read": true, 00:07:56.702 "write": true, 00:07:56.702 "unmap": false, 00:07:56.702 "flush": false, 00:07:56.702 "reset": true, 00:07:56.702 "nvme_admin": false, 00:07:56.702 "nvme_io": false, 00:07:56.702 "nvme_io_md": false, 00:07:56.702 "write_zeroes": true, 00:07:56.702 "zcopy": false, 00:07:56.702 "get_zone_info": false, 00:07:56.702 "zone_management": false, 00:07:56.702 "zone_append": false, 00:07:56.702 "compare": false, 00:07:56.702 "compare_and_write": false, 00:07:56.702 "abort": false, 00:07:56.702 "seek_hole": false, 00:07:56.702 "seek_data": false, 00:07:56.702 "copy": false, 00:07:56.702 "nvme_iov_md": false 00:07:56.702 }, 00:07:56.702 "memory_domains": [ 00:07:56.702 { 00:07:56.702 "dma_device_id": "system", 00:07:56.702 "dma_device_type": 1 00:07:56.702 }, 00:07:56.702 { 00:07:56.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.702 "dma_device_type": 2 00:07:56.702 }, 00:07:56.702 { 00:07:56.702 "dma_device_id": "system", 00:07:56.702 "dma_device_type": 1 00:07:56.702 }, 00:07:56.702 { 00:07:56.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.702 "dma_device_type": 2 00:07:56.702 } 00:07:56.702 ], 00:07:56.702 "driver_specific": { 00:07:56.702 "raid": { 00:07:56.702 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:56.702 "strip_size_kb": 0, 00:07:56.702 "state": "online", 00:07:56.702 "raid_level": "raid1", 00:07:56.702 "superblock": true, 00:07:56.702 "num_base_bdevs": 2, 00:07:56.702 "num_base_bdevs_discovered": 2, 00:07:56.702 "num_base_bdevs_operational": 2, 00:07:56.702 "base_bdevs_list": [ 00:07:56.702 { 00:07:56.702 "name": "pt1", 00:07:56.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.702 "is_configured": true, 00:07:56.702 "data_offset": 2048, 00:07:56.702 "data_size": 63488 00:07:56.702 }, 00:07:56.702 { 00:07:56.702 "name": "pt2", 00:07:56.702 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:56.702 "is_configured": true, 00:07:56.702 "data_offset": 2048, 00:07:56.702 "data_size": 63488 00:07:56.702 } 00:07:56.702 ] 00:07:56.702 } 00:07:56.702 } 00:07:56.703 }' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.703 pt2' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 [2024-12-06 23:42:08.235015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.703 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 45081576-98b8-49bb-b8a6-a5c436cf74a5 '!=' 45081576-98b8-49bb-b8a6-a5c436cf74a5 ']' 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.962 [2024-12-06 23:42:08.282865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.962 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:56.963 "name": "raid_bdev1", 00:07:56.963 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:56.963 "strip_size_kb": 0, 00:07:56.963 "state": "online", 00:07:56.963 "raid_level": "raid1", 00:07:56.963 "superblock": true, 00:07:56.963 "num_base_bdevs": 2, 00:07:56.963 "num_base_bdevs_discovered": 1, 00:07:56.963 "num_base_bdevs_operational": 1, 00:07:56.963 "base_bdevs_list": [ 00:07:56.963 { 00:07:56.963 "name": null, 00:07:56.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.963 "is_configured": false, 00:07:56.963 "data_offset": 0, 00:07:56.963 "data_size": 63488 00:07:56.963 }, 00:07:56.963 { 00:07:56.963 "name": "pt2", 00:07:56.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.963 "is_configured": true, 00:07:56.963 "data_offset": 2048, 00:07:56.963 "data_size": 63488 00:07:56.963 } 00:07:56.963 ] 00:07:56.963 }' 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.963 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.223 [2024-12-06 23:42:08.754834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.223 [2024-12-06 23:42:08.754929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.223 [2024-12-06 23:42:08.755030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.223 [2024-12-06 23:42:08.755111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.223 [2024-12-06 23:42:08.755158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.223 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.484 [2024-12-06 23:42:08.810806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.484 [2024-12-06 23:42:08.810917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.484 [2024-12-06 23:42:08.810954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:57.484 [2024-12-06 23:42:08.810983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.484 [2024-12-06 23:42:08.813231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.484 [2024-12-06 23:42:08.813309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.484 [2024-12-06 23:42:08.813420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:57.484 [2024-12-06 23:42:08.813484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.484 [2024-12-06 23:42:08.813606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:57.484 [2024-12-06 23:42:08.813649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.484 [2024-12-06 23:42:08.813924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:57.484 [2024-12-06 23:42:08.814110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:57.484 [2024-12-06 23:42:08.814152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:07:57.484 [2024-12-06 23:42:08.814332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.484 pt2 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:57.484 "name": "raid_bdev1", 00:07:57.484 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:57.484 "strip_size_kb": 0, 00:07:57.484 "state": "online", 00:07:57.484 "raid_level": "raid1", 00:07:57.484 "superblock": true, 00:07:57.484 "num_base_bdevs": 2, 00:07:57.484 "num_base_bdevs_discovered": 1, 00:07:57.484 "num_base_bdevs_operational": 1, 00:07:57.484 "base_bdevs_list": [ 00:07:57.484 { 00:07:57.484 "name": null, 00:07:57.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.484 "is_configured": false, 00:07:57.484 "data_offset": 2048, 00:07:57.484 "data_size": 63488 00:07:57.484 }, 00:07:57.484 { 00:07:57.484 "name": "pt2", 00:07:57.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.484 "is_configured": true, 00:07:57.484 "data_offset": 2048, 00:07:57.484 "data_size": 63488 00:07:57.484 } 00:07:57.484 ] 00:07:57.484 }' 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.484 23:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.745 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.745 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.745 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.745 [2024-12-06 23:42:09.290798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.745 [2024-12-06 23:42:09.290899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.745 [2024-12-06 23:42:09.290977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.745 [2024-12-06 23:42:09.291029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.745 [2024-12-06 23:42:09.291038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:57.745 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.745 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:57.745 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.745 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.745 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.006 [2024-12-06 23:42:09.354795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.006 [2024-12-06 23:42:09.354912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.006 [2024-12-06 23:42:09.354964] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:58.006 [2024-12-06 23:42:09.354995] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.006 [2024-12-06 23:42:09.357293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.006 [2024-12-06 23:42:09.357366] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.006 [2024-12-06 23:42:09.357474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:58.006 [2024-12-06 23:42:09.357544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.006 [2024-12-06 23:42:09.357729] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:58.006 [2024-12-06 23:42:09.357786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.006 [2024-12-06 23:42:09.357807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:58.006 [2024-12-06 23:42:09.357867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.006 [2024-12-06 23:42:09.357945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:58.006 [2024-12-06 23:42:09.357954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.006 [2024-12-06 23:42:09.358201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:58.006 [2024-12-06 23:42:09.358359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:58.006 [2024-12-06 23:42:09.358371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:58.006 [2024-12-06 23:42:09.358511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.006 pt1 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.006 "name": "raid_bdev1", 00:07:58.006 "uuid": "45081576-98b8-49bb-b8a6-a5c436cf74a5", 00:07:58.006 "strip_size_kb": 0, 00:07:58.006 "state": "online", 00:07:58.006 "raid_level": "raid1", 00:07:58.006 "superblock": true, 00:07:58.006 "num_base_bdevs": 2, 00:07:58.006 "num_base_bdevs_discovered": 1, 00:07:58.006 "num_base_bdevs_operational": 
1, 00:07:58.006 "base_bdevs_list": [ 00:07:58.006 { 00:07:58.006 "name": null, 00:07:58.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.006 "is_configured": false, 00:07:58.006 "data_offset": 2048, 00:07:58.006 "data_size": 63488 00:07:58.006 }, 00:07:58.006 { 00:07:58.006 "name": "pt2", 00:07:58.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.006 "is_configured": true, 00:07:58.006 "data_offset": 2048, 00:07:58.006 "data_size": 63488 00:07:58.006 } 00:07:58.006 ] 00:07:58.006 }' 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.006 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.329 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.329 [2024-12-06 23:42:09.867036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 45081576-98b8-49bb-b8a6-a5c436cf74a5 '!=' 45081576-98b8-49bb-b8a6-a5c436cf74a5 ']' 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63123 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63123 ']' 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63123 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63123 00:07:58.587 killing process with pid 63123 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63123' 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63123 00:07:58.587 [2024-12-06 23:42:09.949642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.587 [2024-12-06 23:42:09.949750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.587 [2024-12-06 23:42:09.949798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.587 [2024-12-06 23:42:09.949813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:58.587 23:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 
63123 00:07:58.844 [2024-12-06 23:42:10.149158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.780 23:42:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:59.780 00:07:59.780 real 0m6.188s 00:07:59.780 user 0m9.465s 00:07:59.780 sys 0m1.087s 00:07:59.780 23:42:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.780 23:42:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.780 ************************************ 00:07:59.780 END TEST raid_superblock_test 00:07:59.780 ************************************ 00:07:59.780 23:42:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:59.780 23:42:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:59.780 23:42:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.780 23:42:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.780 ************************************ 00:07:59.780 START TEST raid_read_error_test 00:07:59.780 ************************************ 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eMaXF5ZCBT 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63453 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63453 00:07:59.780 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63453 ']' 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.780 23:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.039 [2024-12-06 23:42:11.396045] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:08:00.039 [2024-12-06 23:42:11.396161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ] 00:08:00.039 [2024-12-06 23:42:11.568460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.297 [2024-12-06 23:42:11.681293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.555 [2024-12-06 23:42:11.875371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.555 [2024-12-06 23:42:11.875510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 BaseBdev1_malloc 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 true 00:08:00.815 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.816 [2024-12-06 23:42:12.278975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:00.816 [2024-12-06 23:42:12.279071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.816 [2024-12-06 23:42:12.279094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:00.816 [2024-12-06 23:42:12.279105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.816 [2024-12-06 23:42:12.281250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.816 [2024-12-06 23:42:12.281289] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:00.816 BaseBdev1 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.816 BaseBdev2_malloc 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.816 true 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.816 [2024-12-06 23:42:12.346987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.816 [2024-12-06 23:42:12.347038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.816 [2024-12-06 23:42:12.347053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.816 [2024-12-06 23:42:12.347063] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.816 [2024-12-06 23:42:12.349110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.816 [2024-12-06 23:42:12.349148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.816 BaseBdev2 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.816 [2024-12-06 23:42:12.359023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.816 [2024-12-06 23:42:12.360756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.816 [2024-12-06 23:42:12.360977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.816 [2024-12-06 23:42:12.361023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.816 [2024-12-06 23:42:12.361259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:00.816 [2024-12-06 23:42:12.361460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.816 [2024-12-06 23:42:12.361503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:00.816 [2024-12-06 23:42:12.361723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.816 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.076 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.076 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.076 "name": "raid_bdev1", 00:08:01.076 "uuid": "f0b7829d-c89b-4dab-95be-4c79ed17ecf5", 00:08:01.076 "strip_size_kb": 0, 00:08:01.076 "state": "online", 00:08:01.076 "raid_level": "raid1", 00:08:01.076 "superblock": true, 00:08:01.076 "num_base_bdevs": 2, 00:08:01.076 
"num_base_bdevs_discovered": 2, 00:08:01.076 "num_base_bdevs_operational": 2, 00:08:01.076 "base_bdevs_list": [ 00:08:01.076 { 00:08:01.076 "name": "BaseBdev1", 00:08:01.076 "uuid": "b0599e1e-6680-5fb7-a35c-8d22549ed37d", 00:08:01.076 "is_configured": true, 00:08:01.076 "data_offset": 2048, 00:08:01.076 "data_size": 63488 00:08:01.076 }, 00:08:01.076 { 00:08:01.076 "name": "BaseBdev2", 00:08:01.076 "uuid": "924d4ce7-f0af-56e8-a2cc-3091b8975896", 00:08:01.076 "is_configured": true, 00:08:01.076 "data_offset": 2048, 00:08:01.076 "data_size": 63488 00:08:01.076 } 00:08:01.076 ] 00:08:01.076 }' 00:08:01.076 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.076 23:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.336 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.336 23:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.596 [2024-12-06 23:42:12.920292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:02.533 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.533 23:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.533 23:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.533 23:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.533 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.533 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:02.533 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:02.534 23:42:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.534 "name": "raid_bdev1", 00:08:02.534 "uuid": "f0b7829d-c89b-4dab-95be-4c79ed17ecf5", 00:08:02.534 "strip_size_kb": 0, 00:08:02.534 "state": "online", 
00:08:02.534 "raid_level": "raid1", 00:08:02.534 "superblock": true, 00:08:02.534 "num_base_bdevs": 2, 00:08:02.534 "num_base_bdevs_discovered": 2, 00:08:02.534 "num_base_bdevs_operational": 2, 00:08:02.534 "base_bdevs_list": [ 00:08:02.534 { 00:08:02.534 "name": "BaseBdev1", 00:08:02.534 "uuid": "b0599e1e-6680-5fb7-a35c-8d22549ed37d", 00:08:02.534 "is_configured": true, 00:08:02.534 "data_offset": 2048, 00:08:02.534 "data_size": 63488 00:08:02.534 }, 00:08:02.534 { 00:08:02.534 "name": "BaseBdev2", 00:08:02.534 "uuid": "924d4ce7-f0af-56e8-a2cc-3091b8975896", 00:08:02.534 "is_configured": true, 00:08:02.534 "data_offset": 2048, 00:08:02.534 "data_size": 63488 00:08:02.534 } 00:08:02.534 ] 00:08:02.534 }' 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.534 23:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.794 [2024-12-06 23:42:14.253715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.794 [2024-12-06 23:42:14.253801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.794 [2024-12-06 23:42:14.256688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.794 [2024-12-06 23:42:14.256775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.794 [2024-12-06 23:42:14.256874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.794 [2024-12-06 23:42:14.256941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:02.794 { 00:08:02.794 "results": [ 00:08:02.794 { 00:08:02.794 "job": "raid_bdev1", 00:08:02.794 "core_mask": "0x1", 00:08:02.794 "workload": "randrw", 00:08:02.794 "percentage": 50, 00:08:02.794 "status": "finished", 00:08:02.794 "queue_depth": 1, 00:08:02.794 "io_size": 131072, 00:08:02.794 "runtime": 1.334381, 00:08:02.794 "iops": 18016.59346168748, 00:08:02.794 "mibps": 2252.074182710935, 00:08:02.794 "io_failed": 0, 00:08:02.794 "io_timeout": 0, 00:08:02.794 "avg_latency_us": 52.85026725631922, 00:08:02.794 "min_latency_us": 24.258515283842794, 00:08:02.794 "max_latency_us": 1488.1537117903931 00:08:02.794 } 00:08:02.794 ], 00:08:02.794 "core_count": 1 00:08:02.794 } 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63453 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63453 ']' 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63453 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63453 00:08:02.794 killing process with pid 63453 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63453' 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63453 00:08:02.794 [2024-12-06 
23:42:14.301341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.794 23:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63453 00:08:03.055 [2024-12-06 23:42:14.436130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.434 23:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eMaXF5ZCBT 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:04.435 ************************************ 00:08:04.435 END TEST raid_read_error_test 00:08:04.435 ************************************ 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:04.435 00:08:04.435 real 0m4.326s 00:08:04.435 user 0m5.174s 00:08:04.435 sys 0m0.544s 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.435 23:42:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.435 23:42:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:04.435 23:42:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.435 23:42:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.435 23:42:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.435 ************************************ 00:08:04.435 START TEST 
raid_write_error_test 00:08:04.435 ************************************ 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:04.435 23:42:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eNOqiBMzTp 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63599 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63599 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63599 ']' 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.435 23:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.435 [2024-12-06 23:42:15.789506] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:08:04.435 [2024-12-06 23:42:15.789729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63599 ] 00:08:04.435 [2024-12-06 23:42:15.944690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.694 [2024-12-06 23:42:16.059812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.953 [2024-12-06 23:42:16.258676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.953 [2024-12-06 23:42:16.258730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.213 BaseBdev1_malloc 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.213 true 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.213 [2024-12-06 23:42:16.673544] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.213 [2024-12-06 23:42:16.673614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.213 [2024-12-06 23:42:16.673635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:05.213 [2024-12-06 23:42:16.673646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.213 [2024-12-06 23:42:16.675858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.213 [2024-12-06 23:42:16.675899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.213 BaseBdev1 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.213 BaseBdev2_malloc 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:05.213 23:42:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.213 true 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.213 [2024-12-06 23:42:16.739210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:05.213 [2024-12-06 23:42:16.739266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.213 [2024-12-06 23:42:16.739281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.213 [2024-12-06 23:42:16.739291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.213 [2024-12-06 23:42:16.741391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.213 [2024-12-06 23:42:16.741431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:05.213 BaseBdev2 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.213 [2024-12-06 23:42:16.751243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:05.213 [2024-12-06 23:42:16.753041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.213 [2024-12-06 23:42:16.753223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.213 [2024-12-06 23:42:16.753238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.213 [2024-12-06 23:42:16.753457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:05.213 [2024-12-06 23:42:16.753653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.213 [2024-12-06 23:42:16.753682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:05.213 [2024-12-06 23:42:16.753826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.213 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.478 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.478 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.478 "name": "raid_bdev1", 00:08:05.478 "uuid": "d4b2453d-74f4-406d-8dd1-a0054fdbfbc6", 00:08:05.478 "strip_size_kb": 0, 00:08:05.478 "state": "online", 00:08:05.478 "raid_level": "raid1", 00:08:05.478 "superblock": true, 00:08:05.478 "num_base_bdevs": 2, 00:08:05.478 "num_base_bdevs_discovered": 2, 00:08:05.478 "num_base_bdevs_operational": 2, 00:08:05.478 "base_bdevs_list": [ 00:08:05.478 { 00:08:05.478 "name": "BaseBdev1", 00:08:05.478 "uuid": "fa608862-8211-542e-ad48-71302ea1d758", 00:08:05.478 "is_configured": true, 00:08:05.478 "data_offset": 2048, 00:08:05.478 "data_size": 63488 00:08:05.478 }, 00:08:05.478 { 00:08:05.478 "name": "BaseBdev2", 00:08:05.478 "uuid": "f084e51b-3f69-5767-af96-7c835c1976cf", 00:08:05.478 "is_configured": true, 00:08:05.478 "data_offset": 2048, 00:08:05.478 "data_size": 63488 00:08:05.478 } 00:08:05.478 ] 00:08:05.478 }' 00:08:05.478 23:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.478 23:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.744 23:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:05.744 23:42:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:05.744 [2024-12-06 23:42:17.288081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.680 [2024-12-06 23:42:18.204158] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:06.680 [2024-12-06 23:42:18.204223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.680 [2024-12-06 23:42:18.204419] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.680 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.937 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.937 "name": "raid_bdev1", 00:08:06.937 "uuid": "d4b2453d-74f4-406d-8dd1-a0054fdbfbc6", 00:08:06.937 "strip_size_kb": 0, 00:08:06.937 "state": "online", 00:08:06.937 "raid_level": "raid1", 00:08:06.937 "superblock": true, 00:08:06.937 "num_base_bdevs": 2, 00:08:06.937 "num_base_bdevs_discovered": 1, 00:08:06.937 "num_base_bdevs_operational": 1, 00:08:06.937 "base_bdevs_list": [ 00:08:06.937 { 00:08:06.937 "name": null, 00:08:06.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.937 "is_configured": false, 00:08:06.937 "data_offset": 0, 00:08:06.937 "data_size": 63488 00:08:06.937 }, 00:08:06.937 { 00:08:06.937 "name": 
"BaseBdev2", 00:08:06.937 "uuid": "f084e51b-3f69-5767-af96-7c835c1976cf", 00:08:06.937 "is_configured": true, 00:08:06.937 "data_offset": 2048, 00:08:06.937 "data_size": 63488 00:08:06.937 } 00:08:06.937 ] 00:08:06.937 }' 00:08:06.937 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.937 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.196 [2024-12-06 23:42:18.629014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.196 [2024-12-06 23:42:18.629052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.196 [2024-12-06 23:42:18.631745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.196 [2024-12-06 23:42:18.631790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.196 [2024-12-06 23:42:18.631860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.196 [2024-12-06 23:42:18.631872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:07.196 { 00:08:07.196 "results": [ 00:08:07.196 { 00:08:07.196 "job": "raid_bdev1", 00:08:07.196 "core_mask": "0x1", 00:08:07.196 "workload": "randrw", 00:08:07.196 "percentage": 50, 00:08:07.196 "status": "finished", 00:08:07.196 "queue_depth": 1, 00:08:07.196 "io_size": 131072, 00:08:07.196 "runtime": 1.341791, 00:08:07.196 "iops": 21265.60693878555, 00:08:07.196 "mibps": 2658.2008673481937, 00:08:07.196 "io_failed": 0, 00:08:07.196 "io_timeout": 0, 
00:08:07.196 "avg_latency_us": 44.39474317469423, 00:08:07.196 "min_latency_us": 23.36419213973799, 00:08:07.196 "max_latency_us": 1480.9991266375546 00:08:07.196 } 00:08:07.196 ], 00:08:07.196 "core_count": 1 00:08:07.196 } 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63599 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63599 ']' 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63599 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63599 00:08:07.196 killing process with pid 63599 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63599' 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63599 00:08:07.196 [2024-12-06 23:42:18.676152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.196 23:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63599 00:08:07.456 [2024-12-06 23:42:18.810101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eNOqiBMzTp 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:08.835 00:08:08.835 real 0m4.293s 00:08:08.835 user 0m5.126s 00:08:08.835 sys 0m0.541s 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.835 23:42:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.835 ************************************ 00:08:08.835 END TEST raid_write_error_test 00:08:08.835 ************************************ 00:08:08.835 23:42:20 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:08.835 23:42:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:08.835 23:42:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:08.835 23:42:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.835 23:42:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.835 23:42:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.835 ************************************ 00:08:08.835 START TEST raid_state_function_test 00:08:08.835 ************************************ 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.835 
23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63737 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.835 Process raid pid: 63737 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63737' 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63737 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63737 ']' 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.835 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.836 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:08.836 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.836 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.836 [2024-12-06 23:42:20.140745] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:08:08.836 [2024-12-06 23:42:20.140874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.836 [2024-12-06 23:42:20.315521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.096 [2024-12-06 23:42:20.430235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.096 [2024-12-06 23:42:20.619157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.096 [2024-12-06 23:42:20.619198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.666 [2024-12-06 23:42:20.972690] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.666 [2024-12-06 23:42:20.972742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.666 [2024-12-06 23:42:20.972752] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.666 [2024-12-06 23:42:20.972761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.666 [2024-12-06 23:42:20.972767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.666 [2024-12-06 23:42:20.972776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.666 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.667 23:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.667 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.667 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.667 "name": "Existed_Raid", 00:08:09.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.667 "strip_size_kb": 64, 00:08:09.667 "state": "configuring", 00:08:09.667 "raid_level": "raid0", 00:08:09.667 "superblock": false, 00:08:09.667 "num_base_bdevs": 3, 00:08:09.667 "num_base_bdevs_discovered": 0, 00:08:09.667 "num_base_bdevs_operational": 3, 00:08:09.667 "base_bdevs_list": [ 00:08:09.667 { 00:08:09.667 "name": "BaseBdev1", 00:08:09.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.667 "is_configured": false, 00:08:09.667 "data_offset": 0, 00:08:09.667 "data_size": 0 00:08:09.667 }, 00:08:09.667 { 00:08:09.667 "name": "BaseBdev2", 00:08:09.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.667 "is_configured": false, 00:08:09.667 "data_offset": 0, 00:08:09.667 "data_size": 0 00:08:09.667 }, 00:08:09.667 { 00:08:09.667 "name": "BaseBdev3", 00:08:09.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.667 "is_configured": false, 00:08:09.667 "data_offset": 0, 00:08:09.667 "data_size": 0 00:08:09.667 } 00:08:09.667 ] 00:08:09.667 }' 00:08:09.667 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.667 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.927 23:42:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.927 [2024-12-06 23:42:21.419855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.927 [2024-12-06 23:42:21.419898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.927 [2024-12-06 23:42:21.427834] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.927 [2024-12-06 23:42:21.427875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.927 [2024-12-06 23:42:21.427884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.927 [2024-12-06 23:42:21.427893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.927 [2024-12-06 23:42:21.427900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.927 [2024-12-06 23:42:21.427908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.927 [2024-12-06 23:42:21.470825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.927 BaseBdev1 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.927 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.186 [ 00:08:10.186 { 00:08:10.186 "name": "BaseBdev1", 00:08:10.186 "aliases": [ 00:08:10.186 "e080e608-4fcf-4fdd-950c-585001f6a666" 00:08:10.186 ], 00:08:10.186 
"product_name": "Malloc disk", 00:08:10.186 "block_size": 512, 00:08:10.186 "num_blocks": 65536, 00:08:10.186 "uuid": "e080e608-4fcf-4fdd-950c-585001f6a666", 00:08:10.186 "assigned_rate_limits": { 00:08:10.186 "rw_ios_per_sec": 0, 00:08:10.186 "rw_mbytes_per_sec": 0, 00:08:10.186 "r_mbytes_per_sec": 0, 00:08:10.186 "w_mbytes_per_sec": 0 00:08:10.186 }, 00:08:10.186 "claimed": true, 00:08:10.186 "claim_type": "exclusive_write", 00:08:10.186 "zoned": false, 00:08:10.186 "supported_io_types": { 00:08:10.186 "read": true, 00:08:10.186 "write": true, 00:08:10.186 "unmap": true, 00:08:10.186 "flush": true, 00:08:10.186 "reset": true, 00:08:10.186 "nvme_admin": false, 00:08:10.186 "nvme_io": false, 00:08:10.186 "nvme_io_md": false, 00:08:10.186 "write_zeroes": true, 00:08:10.186 "zcopy": true, 00:08:10.186 "get_zone_info": false, 00:08:10.186 "zone_management": false, 00:08:10.186 "zone_append": false, 00:08:10.186 "compare": false, 00:08:10.186 "compare_and_write": false, 00:08:10.186 "abort": true, 00:08:10.186 "seek_hole": false, 00:08:10.186 "seek_data": false, 00:08:10.186 "copy": true, 00:08:10.186 "nvme_iov_md": false 00:08:10.186 }, 00:08:10.186 "memory_domains": [ 00:08:10.186 { 00:08:10.186 "dma_device_id": "system", 00:08:10.186 "dma_device_type": 1 00:08:10.186 }, 00:08:10.187 { 00:08:10.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.187 "dma_device_type": 2 00:08:10.187 } 00:08:10.187 ], 00:08:10.187 "driver_specific": {} 00:08:10.187 } 00:08:10.187 ] 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.187 23:42:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.187 "name": "Existed_Raid", 00:08:10.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.187 "strip_size_kb": 64, 00:08:10.187 "state": "configuring", 00:08:10.187 "raid_level": "raid0", 00:08:10.187 "superblock": false, 00:08:10.187 "num_base_bdevs": 3, 00:08:10.187 "num_base_bdevs_discovered": 1, 00:08:10.187 "num_base_bdevs_operational": 3, 00:08:10.187 "base_bdevs_list": [ 00:08:10.187 { 00:08:10.187 "name": "BaseBdev1", 
00:08:10.187 "uuid": "e080e608-4fcf-4fdd-950c-585001f6a666", 00:08:10.187 "is_configured": true, 00:08:10.187 "data_offset": 0, 00:08:10.187 "data_size": 65536 00:08:10.187 }, 00:08:10.187 { 00:08:10.187 "name": "BaseBdev2", 00:08:10.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.187 "is_configured": false, 00:08:10.187 "data_offset": 0, 00:08:10.187 "data_size": 0 00:08:10.187 }, 00:08:10.187 { 00:08:10.187 "name": "BaseBdev3", 00:08:10.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.187 "is_configured": false, 00:08:10.187 "data_offset": 0, 00:08:10.187 "data_size": 0 00:08:10.187 } 00:08:10.187 ] 00:08:10.187 }' 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.187 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.447 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.447 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.447 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.447 [2024-12-06 23:42:21.994810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.447 [2024-12-06 23:42:21.994870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:10.447 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.447 23:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.447 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.447 23:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.447 [2024-12-06 
23:42:22.006839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.708 [2024-12-06 23:42:22.008723] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.708 [2024-12-06 23:42:22.008763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.708 [2024-12-06 23:42:22.008772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.708 [2024-12-06 23:42:22.008782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.708 "name": "Existed_Raid", 00:08:10.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.708 "strip_size_kb": 64, 00:08:10.708 "state": "configuring", 00:08:10.708 "raid_level": "raid0", 00:08:10.708 "superblock": false, 00:08:10.708 "num_base_bdevs": 3, 00:08:10.708 "num_base_bdevs_discovered": 1, 00:08:10.708 "num_base_bdevs_operational": 3, 00:08:10.708 "base_bdevs_list": [ 00:08:10.708 { 00:08:10.708 "name": "BaseBdev1", 00:08:10.708 "uuid": "e080e608-4fcf-4fdd-950c-585001f6a666", 00:08:10.708 "is_configured": true, 00:08:10.708 "data_offset": 0, 00:08:10.708 "data_size": 65536 00:08:10.708 }, 00:08:10.708 { 00:08:10.708 "name": "BaseBdev2", 00:08:10.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.708 "is_configured": false, 00:08:10.708 "data_offset": 0, 00:08:10.708 "data_size": 0 00:08:10.708 }, 00:08:10.708 { 00:08:10.708 "name": "BaseBdev3", 00:08:10.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.708 "is_configured": false, 00:08:10.708 "data_offset": 0, 00:08:10.708 "data_size": 0 00:08:10.708 } 00:08:10.708 ] 00:08:10.708 }' 00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:10.708 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.968 [2024-12-06 23:42:22.518506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.968 BaseBdev2 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.968 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.228 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.228 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.228 23:42:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.228 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.228 [ 00:08:11.228 { 00:08:11.228 "name": "BaseBdev2", 00:08:11.228 "aliases": [ 00:08:11.228 "31b96463-c9c5-4a09-8ea8-eec9003066b9" 00:08:11.228 ], 00:08:11.229 "product_name": "Malloc disk", 00:08:11.229 "block_size": 512, 00:08:11.229 "num_blocks": 65536, 00:08:11.229 "uuid": "31b96463-c9c5-4a09-8ea8-eec9003066b9", 00:08:11.229 "assigned_rate_limits": { 00:08:11.229 "rw_ios_per_sec": 0, 00:08:11.229 "rw_mbytes_per_sec": 0, 00:08:11.229 "r_mbytes_per_sec": 0, 00:08:11.229 "w_mbytes_per_sec": 0 00:08:11.229 }, 00:08:11.229 "claimed": true, 00:08:11.229 "claim_type": "exclusive_write", 00:08:11.229 "zoned": false, 00:08:11.229 "supported_io_types": { 00:08:11.229 "read": true, 00:08:11.229 "write": true, 00:08:11.229 "unmap": true, 00:08:11.229 "flush": true, 00:08:11.229 "reset": true, 00:08:11.229 "nvme_admin": false, 00:08:11.229 "nvme_io": false, 00:08:11.229 "nvme_io_md": false, 00:08:11.229 "write_zeroes": true, 00:08:11.229 "zcopy": true, 00:08:11.229 "get_zone_info": false, 00:08:11.229 "zone_management": false, 00:08:11.229 "zone_append": false, 00:08:11.229 "compare": false, 00:08:11.229 "compare_and_write": false, 00:08:11.229 "abort": true, 00:08:11.229 "seek_hole": false, 00:08:11.229 "seek_data": false, 00:08:11.229 "copy": true, 00:08:11.229 "nvme_iov_md": false 00:08:11.229 }, 00:08:11.229 "memory_domains": [ 00:08:11.229 { 00:08:11.229 "dma_device_id": "system", 00:08:11.229 "dma_device_type": 1 00:08:11.229 }, 00:08:11.229 { 00:08:11.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.229 "dma_device_type": 2 00:08:11.229 } 00:08:11.229 ], 00:08:11.229 "driver_specific": {} 00:08:11.229 } 00:08:11.229 ] 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.229 23:42:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.229 "name": "Existed_Raid", 00:08:11.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.229 "strip_size_kb": 64, 00:08:11.229 "state": "configuring", 00:08:11.229 "raid_level": "raid0", 00:08:11.229 "superblock": false, 00:08:11.229 "num_base_bdevs": 3, 00:08:11.229 "num_base_bdevs_discovered": 2, 00:08:11.229 "num_base_bdevs_operational": 3, 00:08:11.229 "base_bdevs_list": [ 00:08:11.229 { 00:08:11.229 "name": "BaseBdev1", 00:08:11.229 "uuid": "e080e608-4fcf-4fdd-950c-585001f6a666", 00:08:11.229 "is_configured": true, 00:08:11.229 "data_offset": 0, 00:08:11.229 "data_size": 65536 00:08:11.229 }, 00:08:11.229 { 00:08:11.229 "name": "BaseBdev2", 00:08:11.229 "uuid": "31b96463-c9c5-4a09-8ea8-eec9003066b9", 00:08:11.229 "is_configured": true, 00:08:11.229 "data_offset": 0, 00:08:11.229 "data_size": 65536 00:08:11.229 }, 00:08:11.229 { 00:08:11.229 "name": "BaseBdev3", 00:08:11.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.229 "is_configured": false, 00:08:11.229 "data_offset": 0, 00:08:11.229 "data_size": 0 00:08:11.229 } 00:08:11.229 ] 00:08:11.229 }' 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.229 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.489 23:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:11.489 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.489 23:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.489 [2024-12-06 23:42:23.017509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.489 [2024-12-06 23:42:23.017558] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:11.489 [2024-12-06 23:42:23.017588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:11.489 [2024-12-06 23:42:23.017874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:11.489 [2024-12-06 23:42:23.018055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:11.489 [2024-12-06 23:42:23.018073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:11.489 [2024-12-06 23:42:23.018336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.489 BaseBdev3 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.489 
23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.489 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.489 [ 00:08:11.489 { 00:08:11.489 "name": "BaseBdev3", 00:08:11.489 "aliases": [ 00:08:11.489 "9853ca8a-4ce6-4a42-981a-ab611304f406" 00:08:11.489 ], 00:08:11.489 "product_name": "Malloc disk", 00:08:11.489 "block_size": 512, 00:08:11.489 "num_blocks": 65536, 00:08:11.489 "uuid": "9853ca8a-4ce6-4a42-981a-ab611304f406", 00:08:11.489 "assigned_rate_limits": { 00:08:11.489 "rw_ios_per_sec": 0, 00:08:11.489 "rw_mbytes_per_sec": 0, 00:08:11.489 "r_mbytes_per_sec": 0, 00:08:11.489 "w_mbytes_per_sec": 0 00:08:11.489 }, 00:08:11.489 "claimed": true, 00:08:11.489 "claim_type": "exclusive_write", 00:08:11.489 "zoned": false, 00:08:11.489 "supported_io_types": { 00:08:11.489 "read": true, 00:08:11.489 "write": true, 00:08:11.489 "unmap": true, 00:08:11.489 "flush": true, 00:08:11.489 "reset": true, 00:08:11.489 "nvme_admin": false, 00:08:11.489 "nvme_io": false, 00:08:11.489 "nvme_io_md": false, 00:08:11.489 "write_zeroes": true, 00:08:11.489 "zcopy": true, 00:08:11.489 "get_zone_info": false, 00:08:11.489 "zone_management": false, 00:08:11.489 "zone_append": false, 00:08:11.489 "compare": false, 00:08:11.489 "compare_and_write": false, 00:08:11.489 "abort": true, 00:08:11.489 "seek_hole": false, 00:08:11.489 "seek_data": false, 00:08:11.750 "copy": true, 00:08:11.750 "nvme_iov_md": false 00:08:11.750 }, 00:08:11.750 "memory_domains": [ 00:08:11.750 { 00:08:11.750 "dma_device_id": "system", 00:08:11.750 "dma_device_type": 1 00:08:11.750 }, 00:08:11.750 { 00:08:11.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.750 "dma_device_type": 2 00:08:11.750 } 00:08:11.750 ], 00:08:11.750 "driver_specific": {} 00:08:11.750 } 00:08:11.750 ] 
00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.750 "name": "Existed_Raid", 00:08:11.750 "uuid": "3b8e72fe-aba2-44d1-9332-c5553da39f78", 00:08:11.750 "strip_size_kb": 64, 00:08:11.750 "state": "online", 00:08:11.750 "raid_level": "raid0", 00:08:11.750 "superblock": false, 00:08:11.750 "num_base_bdevs": 3, 00:08:11.750 "num_base_bdevs_discovered": 3, 00:08:11.750 "num_base_bdevs_operational": 3, 00:08:11.750 "base_bdevs_list": [ 00:08:11.750 { 00:08:11.750 "name": "BaseBdev1", 00:08:11.750 "uuid": "e080e608-4fcf-4fdd-950c-585001f6a666", 00:08:11.750 "is_configured": true, 00:08:11.750 "data_offset": 0, 00:08:11.750 "data_size": 65536 00:08:11.750 }, 00:08:11.750 { 00:08:11.750 "name": "BaseBdev2", 00:08:11.750 "uuid": "31b96463-c9c5-4a09-8ea8-eec9003066b9", 00:08:11.750 "is_configured": true, 00:08:11.750 "data_offset": 0, 00:08:11.750 "data_size": 65536 00:08:11.750 }, 00:08:11.750 { 00:08:11.750 "name": "BaseBdev3", 00:08:11.750 "uuid": "9853ca8a-4ce6-4a42-981a-ab611304f406", 00:08:11.750 "is_configured": true, 00:08:11.750 "data_offset": 0, 00:08:11.750 "data_size": 65536 00:08:11.750 } 00:08:11.750 ] 00:08:11.750 }' 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.750 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.009 [2024-12-06 23:42:23.525078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.009 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.009 "name": "Existed_Raid", 00:08:12.009 "aliases": [ 00:08:12.009 "3b8e72fe-aba2-44d1-9332-c5553da39f78" 00:08:12.009 ], 00:08:12.009 "product_name": "Raid Volume", 00:08:12.009 "block_size": 512, 00:08:12.009 "num_blocks": 196608, 00:08:12.009 "uuid": "3b8e72fe-aba2-44d1-9332-c5553da39f78", 00:08:12.009 "assigned_rate_limits": { 00:08:12.009 "rw_ios_per_sec": 0, 00:08:12.009 "rw_mbytes_per_sec": 0, 00:08:12.009 "r_mbytes_per_sec": 0, 00:08:12.009 "w_mbytes_per_sec": 0 00:08:12.009 }, 00:08:12.009 "claimed": false, 00:08:12.009 "zoned": false, 00:08:12.009 "supported_io_types": { 00:08:12.009 "read": true, 00:08:12.009 "write": true, 00:08:12.009 "unmap": true, 00:08:12.009 "flush": true, 00:08:12.009 "reset": true, 00:08:12.009 "nvme_admin": false, 00:08:12.009 "nvme_io": false, 00:08:12.009 "nvme_io_md": false, 00:08:12.009 "write_zeroes": true, 00:08:12.009 "zcopy": false, 00:08:12.009 "get_zone_info": false, 00:08:12.009 "zone_management": false, 00:08:12.009 
"zone_append": false, 00:08:12.009 "compare": false, 00:08:12.009 "compare_and_write": false, 00:08:12.009 "abort": false, 00:08:12.009 "seek_hole": false, 00:08:12.009 "seek_data": false, 00:08:12.009 "copy": false, 00:08:12.010 "nvme_iov_md": false 00:08:12.010 }, 00:08:12.010 "memory_domains": [ 00:08:12.010 { 00:08:12.010 "dma_device_id": "system", 00:08:12.010 "dma_device_type": 1 00:08:12.010 }, 00:08:12.010 { 00:08:12.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.010 "dma_device_type": 2 00:08:12.010 }, 00:08:12.010 { 00:08:12.010 "dma_device_id": "system", 00:08:12.010 "dma_device_type": 1 00:08:12.010 }, 00:08:12.010 { 00:08:12.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.010 "dma_device_type": 2 00:08:12.010 }, 00:08:12.010 { 00:08:12.010 "dma_device_id": "system", 00:08:12.010 "dma_device_type": 1 00:08:12.010 }, 00:08:12.010 { 00:08:12.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.010 "dma_device_type": 2 00:08:12.010 } 00:08:12.010 ], 00:08:12.010 "driver_specific": { 00:08:12.010 "raid": { 00:08:12.010 "uuid": "3b8e72fe-aba2-44d1-9332-c5553da39f78", 00:08:12.010 "strip_size_kb": 64, 00:08:12.010 "state": "online", 00:08:12.010 "raid_level": "raid0", 00:08:12.010 "superblock": false, 00:08:12.010 "num_base_bdevs": 3, 00:08:12.010 "num_base_bdevs_discovered": 3, 00:08:12.010 "num_base_bdevs_operational": 3, 00:08:12.010 "base_bdevs_list": [ 00:08:12.010 { 00:08:12.010 "name": "BaseBdev1", 00:08:12.010 "uuid": "e080e608-4fcf-4fdd-950c-585001f6a666", 00:08:12.010 "is_configured": true, 00:08:12.010 "data_offset": 0, 00:08:12.010 "data_size": 65536 00:08:12.010 }, 00:08:12.010 { 00:08:12.010 "name": "BaseBdev2", 00:08:12.010 "uuid": "31b96463-c9c5-4a09-8ea8-eec9003066b9", 00:08:12.010 "is_configured": true, 00:08:12.010 "data_offset": 0, 00:08:12.010 "data_size": 65536 00:08:12.010 }, 00:08:12.010 { 00:08:12.010 "name": "BaseBdev3", 00:08:12.010 "uuid": "9853ca8a-4ce6-4a42-981a-ab611304f406", 00:08:12.010 "is_configured": true, 
00:08:12.010 "data_offset": 0, 00:08:12.010 "data_size": 65536 00:08:12.010 } 00:08:12.010 ] 00:08:12.010 } 00:08:12.010 } 00:08:12.010 }' 00:08:12.010 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.268 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:12.268 BaseBdev2 00:08:12.268 BaseBdev3' 00:08:12.268 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.268 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.269 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.269 [2024-12-06 23:42:23.804296] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.269 [2024-12-06 23:42:23.804392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.269 [2024-12-06 23:42:23.804478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.528 "name": "Existed_Raid", 00:08:12.528 "uuid": "3b8e72fe-aba2-44d1-9332-c5553da39f78", 00:08:12.528 "strip_size_kb": 64, 00:08:12.528 "state": "offline", 00:08:12.528 "raid_level": "raid0", 00:08:12.528 "superblock": false, 00:08:12.528 "num_base_bdevs": 3, 00:08:12.528 "num_base_bdevs_discovered": 2, 00:08:12.528 "num_base_bdevs_operational": 2, 00:08:12.528 "base_bdevs_list": [ 00:08:12.528 { 00:08:12.528 "name": null, 00:08:12.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.528 "is_configured": false, 00:08:12.528 "data_offset": 0, 00:08:12.528 "data_size": 65536 00:08:12.528 }, 00:08:12.528 { 00:08:12.528 "name": "BaseBdev2", 00:08:12.528 "uuid": "31b96463-c9c5-4a09-8ea8-eec9003066b9", 00:08:12.528 "is_configured": true, 00:08:12.528 "data_offset": 0, 00:08:12.528 "data_size": 65536 00:08:12.528 }, 00:08:12.528 { 00:08:12.528 "name": "BaseBdev3", 00:08:12.528 "uuid": "9853ca8a-4ce6-4a42-981a-ab611304f406", 00:08:12.528 "is_configured": true, 00:08:12.528 "data_offset": 0, 00:08:12.528 "data_size": 65536 00:08:12.528 } 00:08:12.528 ] 00:08:12.528 }' 00:08:12.528 23:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.528 23:42:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.098 [2024-12-06 23:42:24.418852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.098 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.098 [2024-12-06 23:42:24.572889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.098 [2024-12-06 23:42:24.572985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.358 23:42:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.358 BaseBdev2 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.358 23:42:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.358 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.358 [ 00:08:13.358 { 00:08:13.358 "name": "BaseBdev2", 00:08:13.358 "aliases": [ 00:08:13.358 "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d" 00:08:13.358 ], 00:08:13.358 "product_name": "Malloc disk", 00:08:13.358 "block_size": 512, 00:08:13.358 "num_blocks": 65536, 00:08:13.358 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:13.358 "assigned_rate_limits": { 00:08:13.358 "rw_ios_per_sec": 0, 00:08:13.358 "rw_mbytes_per_sec": 0, 00:08:13.358 "r_mbytes_per_sec": 0, 00:08:13.358 "w_mbytes_per_sec": 0 00:08:13.358 }, 00:08:13.358 "claimed": false, 00:08:13.358 "zoned": false, 00:08:13.358 "supported_io_types": { 00:08:13.358 "read": true, 00:08:13.358 "write": true, 00:08:13.358 "unmap": true, 00:08:13.358 "flush": true, 00:08:13.358 "reset": true, 00:08:13.358 "nvme_admin": false, 00:08:13.358 "nvme_io": false, 00:08:13.358 "nvme_io_md": false, 00:08:13.358 "write_zeroes": true, 00:08:13.358 "zcopy": true, 00:08:13.358 "get_zone_info": false, 00:08:13.358 "zone_management": false, 00:08:13.358 "zone_append": false, 00:08:13.358 "compare": false, 00:08:13.358 "compare_and_write": false, 00:08:13.358 "abort": true, 00:08:13.358 "seek_hole": false, 00:08:13.358 "seek_data": false, 00:08:13.358 "copy": true, 00:08:13.358 "nvme_iov_md": false 00:08:13.358 }, 00:08:13.358 "memory_domains": [ 00:08:13.359 { 00:08:13.359 "dma_device_id": "system", 00:08:13.359 "dma_device_type": 1 00:08:13.359 }, 00:08:13.359 { 00:08:13.359 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:13.359 "dma_device_type": 2 00:08:13.359 } 00:08:13.359 ], 00:08:13.359 "driver_specific": {} 00:08:13.359 } 00:08:13.359 ] 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.359 BaseBdev3 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.359 23:42:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.359 [ 00:08:13.359 { 00:08:13.359 "name": "BaseBdev3", 00:08:13.359 "aliases": [ 00:08:13.359 "4e940e95-2bba-49d0-b94f-e4f490e90e3a" 00:08:13.359 ], 00:08:13.359 "product_name": "Malloc disk", 00:08:13.359 "block_size": 512, 00:08:13.359 "num_blocks": 65536, 00:08:13.359 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:13.359 "assigned_rate_limits": { 00:08:13.359 "rw_ios_per_sec": 0, 00:08:13.359 "rw_mbytes_per_sec": 0, 00:08:13.359 "r_mbytes_per_sec": 0, 00:08:13.359 "w_mbytes_per_sec": 0 00:08:13.359 }, 00:08:13.359 "claimed": false, 00:08:13.359 "zoned": false, 00:08:13.359 "supported_io_types": { 00:08:13.359 "read": true, 00:08:13.359 "write": true, 00:08:13.359 "unmap": true, 00:08:13.359 "flush": true, 00:08:13.359 "reset": true, 00:08:13.359 "nvme_admin": false, 00:08:13.359 "nvme_io": false, 00:08:13.359 "nvme_io_md": false, 00:08:13.359 "write_zeroes": true, 00:08:13.359 "zcopy": true, 00:08:13.359 "get_zone_info": false, 00:08:13.359 "zone_management": false, 00:08:13.359 "zone_append": false, 00:08:13.359 "compare": false, 00:08:13.359 "compare_and_write": false, 00:08:13.359 "abort": true, 00:08:13.359 "seek_hole": false, 00:08:13.359 "seek_data": false, 00:08:13.359 "copy": true, 00:08:13.359 "nvme_iov_md": false 00:08:13.359 }, 00:08:13.359 "memory_domains": [ 00:08:13.359 { 00:08:13.359 "dma_device_id": "system", 00:08:13.359 "dma_device_type": 1 00:08:13.359 }, 00:08:13.359 { 00:08:13.359 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:13.359 "dma_device_type": 2 00:08:13.359 } 00:08:13.359 ], 00:08:13.359 "driver_specific": {} 00:08:13.359 } 00:08:13.359 ] 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.359 [2024-12-06 23:42:24.890324] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.359 [2024-12-06 23:42:24.890419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.359 [2024-12-06 23:42:24.890463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.359 [2024-12-06 23:42:24.892308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.359 
23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.359 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.618 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.618 "name": "Existed_Raid", 00:08:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.618 "strip_size_kb": 64, 00:08:13.618 "state": "configuring", 00:08:13.618 "raid_level": "raid0", 00:08:13.618 "superblock": false, 00:08:13.618 "num_base_bdevs": 3, 00:08:13.618 "num_base_bdevs_discovered": 2, 00:08:13.618 "num_base_bdevs_operational": 3, 00:08:13.618 "base_bdevs_list": [ 00:08:13.618 { 00:08:13.618 "name": "BaseBdev1", 00:08:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.618 "is_configured": false, 00:08:13.618 
"data_offset": 0, 00:08:13.618 "data_size": 0 00:08:13.618 }, 00:08:13.618 { 00:08:13.618 "name": "BaseBdev2", 00:08:13.618 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:13.618 "is_configured": true, 00:08:13.618 "data_offset": 0, 00:08:13.618 "data_size": 65536 00:08:13.618 }, 00:08:13.618 { 00:08:13.618 "name": "BaseBdev3", 00:08:13.618 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:13.618 "is_configured": true, 00:08:13.618 "data_offset": 0, 00:08:13.618 "data_size": 65536 00:08:13.618 } 00:08:13.618 ] 00:08:13.618 }' 00:08:13.618 23:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.618 23:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.877 [2024-12-06 23:42:25.365534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.877 "name": "Existed_Raid", 00:08:13.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.877 "strip_size_kb": 64, 00:08:13.877 "state": "configuring", 00:08:13.877 "raid_level": "raid0", 00:08:13.877 "superblock": false, 00:08:13.877 "num_base_bdevs": 3, 00:08:13.877 "num_base_bdevs_discovered": 1, 00:08:13.877 "num_base_bdevs_operational": 3, 00:08:13.877 "base_bdevs_list": [ 00:08:13.877 { 00:08:13.877 "name": "BaseBdev1", 00:08:13.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.877 "is_configured": false, 00:08:13.877 "data_offset": 0, 00:08:13.877 "data_size": 0 00:08:13.877 }, 00:08:13.877 { 00:08:13.877 "name": null, 00:08:13.877 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:13.877 "is_configured": false, 00:08:13.877 "data_offset": 0, 00:08:13.877 "data_size": 65536 00:08:13.877 }, 00:08:13.877 { 
00:08:13.877 "name": "BaseBdev3", 00:08:13.877 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:13.877 "is_configured": true, 00:08:13.877 "data_offset": 0, 00:08:13.877 "data_size": 65536 00:08:13.877 } 00:08:13.877 ] 00:08:13.877 }' 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.877 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 [2024-12-06 23:42:25.885990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.470 BaseBdev1 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:14.470 23:42:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 [ 00:08:14.470 { 00:08:14.470 "name": "BaseBdev1", 00:08:14.470 "aliases": [ 00:08:14.470 "c43e839f-f914-4fa4-b0a5-5712ba9e082d" 00:08:14.470 ], 00:08:14.470 "product_name": "Malloc disk", 00:08:14.470 "block_size": 512, 00:08:14.470 "num_blocks": 65536, 00:08:14.470 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:14.470 "assigned_rate_limits": { 00:08:14.470 "rw_ios_per_sec": 0, 00:08:14.470 "rw_mbytes_per_sec": 0, 00:08:14.470 "r_mbytes_per_sec": 0, 00:08:14.470 "w_mbytes_per_sec": 0 00:08:14.470 }, 00:08:14.470 "claimed": true, 00:08:14.470 "claim_type": "exclusive_write", 00:08:14.470 "zoned": false, 00:08:14.470 "supported_io_types": { 00:08:14.470 "read": true, 00:08:14.470 "write": true, 00:08:14.470 "unmap": true, 00:08:14.470 "flush": true, 
00:08:14.470 "reset": true, 00:08:14.470 "nvme_admin": false, 00:08:14.470 "nvme_io": false, 00:08:14.470 "nvme_io_md": false, 00:08:14.470 "write_zeroes": true, 00:08:14.470 "zcopy": true, 00:08:14.470 "get_zone_info": false, 00:08:14.470 "zone_management": false, 00:08:14.470 "zone_append": false, 00:08:14.470 "compare": false, 00:08:14.470 "compare_and_write": false, 00:08:14.470 "abort": true, 00:08:14.470 "seek_hole": false, 00:08:14.470 "seek_data": false, 00:08:14.470 "copy": true, 00:08:14.470 "nvme_iov_md": false 00:08:14.470 }, 00:08:14.470 "memory_domains": [ 00:08:14.470 { 00:08:14.470 "dma_device_id": "system", 00:08:14.470 "dma_device_type": 1 00:08:14.470 }, 00:08:14.470 { 00:08:14.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.470 "dma_device_type": 2 00:08:14.470 } 00:08:14.470 ], 00:08:14.470 "driver_specific": {} 00:08:14.470 } 00:08:14.470 ] 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.470 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.471 "name": "Existed_Raid", 00:08:14.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.471 "strip_size_kb": 64, 00:08:14.471 "state": "configuring", 00:08:14.471 "raid_level": "raid0", 00:08:14.471 "superblock": false, 00:08:14.471 "num_base_bdevs": 3, 00:08:14.471 "num_base_bdevs_discovered": 2, 00:08:14.471 "num_base_bdevs_operational": 3, 00:08:14.471 "base_bdevs_list": [ 00:08:14.471 { 00:08:14.471 "name": "BaseBdev1", 00:08:14.471 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:14.471 "is_configured": true, 00:08:14.471 "data_offset": 0, 00:08:14.471 "data_size": 65536 00:08:14.471 }, 00:08:14.471 { 00:08:14.471 "name": null, 00:08:14.471 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:14.471 "is_configured": false, 00:08:14.471 "data_offset": 0, 00:08:14.471 "data_size": 65536 00:08:14.471 }, 00:08:14.471 { 00:08:14.471 "name": "BaseBdev3", 00:08:14.471 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:14.471 "is_configured": true, 00:08:14.471 "data_offset": 0, 00:08:14.471 "data_size": 65536 
00:08:14.471 } 00:08:14.471 ] 00:08:14.471 }' 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.471 23:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.056 [2024-12-06 23:42:26.413137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.056 
23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.056 "name": "Existed_Raid", 00:08:15.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.056 "strip_size_kb": 64, 00:08:15.056 "state": "configuring", 00:08:15.056 "raid_level": "raid0", 00:08:15.056 "superblock": false, 00:08:15.056 "num_base_bdevs": 3, 00:08:15.056 "num_base_bdevs_discovered": 1, 00:08:15.056 "num_base_bdevs_operational": 3, 00:08:15.056 "base_bdevs_list": [ 00:08:15.056 { 00:08:15.056 "name": "BaseBdev1", 00:08:15.056 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:15.056 "is_configured": true, 00:08:15.056 "data_offset": 0, 00:08:15.056 "data_size": 65536 00:08:15.056 }, 00:08:15.056 { 00:08:15.056 "name": null, 
00:08:15.056 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:15.056 "is_configured": false, 00:08:15.056 "data_offset": 0, 00:08:15.056 "data_size": 65536 00:08:15.056 }, 00:08:15.056 { 00:08:15.056 "name": null, 00:08:15.056 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:15.056 "is_configured": false, 00:08:15.056 "data_offset": 0, 00:08:15.056 "data_size": 65536 00:08:15.056 } 00:08:15.056 ] 00:08:15.056 }' 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.056 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.316 [2024-12-06 23:42:26.852400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.316 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.576 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.576 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.576 "name": "Existed_Raid", 00:08:15.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.576 "strip_size_kb": 64, 00:08:15.576 "state": "configuring", 00:08:15.576 "raid_level": "raid0", 00:08:15.576 "superblock": false, 00:08:15.576 
"num_base_bdevs": 3, 00:08:15.577 "num_base_bdevs_discovered": 2, 00:08:15.577 "num_base_bdevs_operational": 3, 00:08:15.577 "base_bdevs_list": [ 00:08:15.577 { 00:08:15.577 "name": "BaseBdev1", 00:08:15.577 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:15.577 "is_configured": true, 00:08:15.577 "data_offset": 0, 00:08:15.577 "data_size": 65536 00:08:15.577 }, 00:08:15.577 { 00:08:15.577 "name": null, 00:08:15.577 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:15.577 "is_configured": false, 00:08:15.577 "data_offset": 0, 00:08:15.577 "data_size": 65536 00:08:15.577 }, 00:08:15.577 { 00:08:15.577 "name": "BaseBdev3", 00:08:15.577 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:15.577 "is_configured": true, 00:08:15.577 "data_offset": 0, 00:08:15.577 "data_size": 65536 00:08:15.577 } 00:08:15.577 ] 00:08:15.577 }' 00:08:15.577 23:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.577 23:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.836 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.836 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.836 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.836 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.836 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.836 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:15.836 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.836 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.836 23:42:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.836 [2024-12-06 23:42:27.323649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.096 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.096 "name": "Existed_Raid", 00:08:16.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.097 "strip_size_kb": 64, 00:08:16.097 "state": "configuring", 00:08:16.097 "raid_level": "raid0", 00:08:16.097 "superblock": false, 00:08:16.097 "num_base_bdevs": 3, 00:08:16.097 "num_base_bdevs_discovered": 1, 00:08:16.097 "num_base_bdevs_operational": 3, 00:08:16.097 "base_bdevs_list": [ 00:08:16.097 { 00:08:16.097 "name": null, 00:08:16.097 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:16.097 "is_configured": false, 00:08:16.097 "data_offset": 0, 00:08:16.097 "data_size": 65536 00:08:16.097 }, 00:08:16.097 { 00:08:16.097 "name": null, 00:08:16.097 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:16.097 "is_configured": false, 00:08:16.097 "data_offset": 0, 00:08:16.097 "data_size": 65536 00:08:16.097 }, 00:08:16.097 { 00:08:16.097 "name": "BaseBdev3", 00:08:16.097 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:16.097 "is_configured": true, 00:08:16.097 "data_offset": 0, 00:08:16.097 "data_size": 65536 00:08:16.097 } 00:08:16.097 ] 00:08:16.097 }' 00:08:16.097 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.097 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.357 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.357 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:16.357 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.357 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.357 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 [2024-12-06 23:42:27.941380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.618 "name": "Existed_Raid", 00:08:16.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.618 "strip_size_kb": 64, 00:08:16.618 "state": "configuring", 00:08:16.618 "raid_level": "raid0", 00:08:16.618 "superblock": false, 00:08:16.618 "num_base_bdevs": 3, 00:08:16.618 "num_base_bdevs_discovered": 2, 00:08:16.618 "num_base_bdevs_operational": 3, 00:08:16.618 "base_bdevs_list": [ 00:08:16.618 { 00:08:16.618 "name": null, 00:08:16.618 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:16.618 "is_configured": false, 00:08:16.618 "data_offset": 0, 00:08:16.618 "data_size": 65536 00:08:16.618 }, 00:08:16.618 { 00:08:16.618 "name": "BaseBdev2", 00:08:16.618 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:16.618 "is_configured": true, 00:08:16.618 "data_offset": 0, 00:08:16.618 "data_size": 65536 00:08:16.618 }, 00:08:16.618 { 00:08:16.618 "name": "BaseBdev3", 00:08:16.618 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:16.618 "is_configured": true, 00:08:16.618 "data_offset": 0, 00:08:16.618 "data_size": 65536 00:08:16.618 } 00:08:16.618 ] 00:08:16.618 }' 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.618 23:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.878 23:42:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c43e839f-f914-4fa4-b0a5-5712ba9e082d 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.878 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.139 [2024-12-06 23:42:28.464319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:17.139 [2024-12-06 23:42:28.464356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:17.139 [2024-12-06 23:42:28.464365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:17.139 [2024-12-06 23:42:28.464584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:17.139 [2024-12-06 23:42:28.464762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:17.139 [2024-12-06 23:42:28.464772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:17.139 [2024-12-06 23:42:28.465025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.139 NewBaseBdev 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:17.139 [ 00:08:17.139 { 00:08:17.139 "name": "NewBaseBdev", 00:08:17.139 "aliases": [ 00:08:17.139 "c43e839f-f914-4fa4-b0a5-5712ba9e082d" 00:08:17.139 ], 00:08:17.139 "product_name": "Malloc disk", 00:08:17.139 "block_size": 512, 00:08:17.139 "num_blocks": 65536, 00:08:17.139 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:17.139 "assigned_rate_limits": { 00:08:17.139 "rw_ios_per_sec": 0, 00:08:17.139 "rw_mbytes_per_sec": 0, 00:08:17.139 "r_mbytes_per_sec": 0, 00:08:17.139 "w_mbytes_per_sec": 0 00:08:17.139 }, 00:08:17.139 "claimed": true, 00:08:17.139 "claim_type": "exclusive_write", 00:08:17.139 "zoned": false, 00:08:17.139 "supported_io_types": { 00:08:17.139 "read": true, 00:08:17.139 "write": true, 00:08:17.139 "unmap": true, 00:08:17.139 "flush": true, 00:08:17.139 "reset": true, 00:08:17.139 "nvme_admin": false, 00:08:17.139 "nvme_io": false, 00:08:17.139 "nvme_io_md": false, 00:08:17.139 "write_zeroes": true, 00:08:17.139 "zcopy": true, 00:08:17.139 "get_zone_info": false, 00:08:17.139 "zone_management": false, 00:08:17.139 "zone_append": false, 00:08:17.139 "compare": false, 00:08:17.139 "compare_and_write": false, 00:08:17.139 "abort": true, 00:08:17.139 "seek_hole": false, 00:08:17.139 "seek_data": false, 00:08:17.139 "copy": true, 00:08:17.139 "nvme_iov_md": false 00:08:17.139 }, 00:08:17.139 "memory_domains": [ 00:08:17.139 { 00:08:17.139 "dma_device_id": "system", 00:08:17.139 "dma_device_type": 1 00:08:17.139 }, 00:08:17.139 { 00:08:17.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.139 "dma_device_type": 2 00:08:17.139 } 00:08:17.139 ], 00:08:17.139 "driver_specific": {} 00:08:17.139 } 00:08:17.139 ] 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.139 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.140 "name": "Existed_Raid", 00:08:17.140 "uuid": "bc0eaef3-2c1a-42fd-ad90-bdb593306a18", 00:08:17.140 "strip_size_kb": 64, 00:08:17.140 "state": "online", 00:08:17.140 "raid_level": "raid0", 00:08:17.140 "superblock": false, 00:08:17.140 "num_base_bdevs": 3, 00:08:17.140 
"num_base_bdevs_discovered": 3, 00:08:17.140 "num_base_bdevs_operational": 3, 00:08:17.140 "base_bdevs_list": [ 00:08:17.140 { 00:08:17.140 "name": "NewBaseBdev", 00:08:17.140 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:17.140 "is_configured": true, 00:08:17.140 "data_offset": 0, 00:08:17.140 "data_size": 65536 00:08:17.140 }, 00:08:17.140 { 00:08:17.140 "name": "BaseBdev2", 00:08:17.140 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:17.140 "is_configured": true, 00:08:17.140 "data_offset": 0, 00:08:17.140 "data_size": 65536 00:08:17.140 }, 00:08:17.140 { 00:08:17.140 "name": "BaseBdev3", 00:08:17.140 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:17.140 "is_configured": true, 00:08:17.140 "data_offset": 0, 00:08:17.140 "data_size": 65536 00:08:17.140 } 00:08:17.140 ] 00:08:17.140 }' 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.140 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.711 23:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 [2024-12-06 23:42:28.979853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.711 "name": "Existed_Raid", 00:08:17.711 "aliases": [ 00:08:17.711 "bc0eaef3-2c1a-42fd-ad90-bdb593306a18" 00:08:17.711 ], 00:08:17.711 "product_name": "Raid Volume", 00:08:17.711 "block_size": 512, 00:08:17.711 "num_blocks": 196608, 00:08:17.711 "uuid": "bc0eaef3-2c1a-42fd-ad90-bdb593306a18", 00:08:17.711 "assigned_rate_limits": { 00:08:17.711 "rw_ios_per_sec": 0, 00:08:17.711 "rw_mbytes_per_sec": 0, 00:08:17.711 "r_mbytes_per_sec": 0, 00:08:17.711 "w_mbytes_per_sec": 0 00:08:17.711 }, 00:08:17.711 "claimed": false, 00:08:17.711 "zoned": false, 00:08:17.711 "supported_io_types": { 00:08:17.711 "read": true, 00:08:17.711 "write": true, 00:08:17.711 "unmap": true, 00:08:17.711 "flush": true, 00:08:17.711 "reset": true, 00:08:17.711 "nvme_admin": false, 00:08:17.711 "nvme_io": false, 00:08:17.711 "nvme_io_md": false, 00:08:17.711 "write_zeroes": true, 00:08:17.711 "zcopy": false, 00:08:17.711 "get_zone_info": false, 00:08:17.711 "zone_management": false, 00:08:17.711 "zone_append": false, 00:08:17.711 "compare": false, 00:08:17.711 "compare_and_write": false, 00:08:17.711 "abort": false, 00:08:17.711 "seek_hole": false, 00:08:17.711 "seek_data": false, 00:08:17.711 "copy": false, 00:08:17.711 "nvme_iov_md": false 00:08:17.711 }, 00:08:17.711 "memory_domains": [ 00:08:17.711 { 00:08:17.711 "dma_device_id": "system", 00:08:17.711 "dma_device_type": 1 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.711 "dma_device_type": 2 00:08:17.711 }, 
00:08:17.711 { 00:08:17.711 "dma_device_id": "system", 00:08:17.711 "dma_device_type": 1 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.711 "dma_device_type": 2 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "dma_device_id": "system", 00:08:17.711 "dma_device_type": 1 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.711 "dma_device_type": 2 00:08:17.711 } 00:08:17.711 ], 00:08:17.711 "driver_specific": { 00:08:17.711 "raid": { 00:08:17.711 "uuid": "bc0eaef3-2c1a-42fd-ad90-bdb593306a18", 00:08:17.711 "strip_size_kb": 64, 00:08:17.711 "state": "online", 00:08:17.711 "raid_level": "raid0", 00:08:17.711 "superblock": false, 00:08:17.711 "num_base_bdevs": 3, 00:08:17.711 "num_base_bdevs_discovered": 3, 00:08:17.711 "num_base_bdevs_operational": 3, 00:08:17.711 "base_bdevs_list": [ 00:08:17.711 { 00:08:17.711 "name": "NewBaseBdev", 00:08:17.711 "uuid": "c43e839f-f914-4fa4-b0a5-5712ba9e082d", 00:08:17.711 "is_configured": true, 00:08:17.711 "data_offset": 0, 00:08:17.711 "data_size": 65536 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "name": "BaseBdev2", 00:08:17.711 "uuid": "8c1dfe62-f4d5-4a0f-846c-29f83c6b128d", 00:08:17.711 "is_configured": true, 00:08:17.711 "data_offset": 0, 00:08:17.711 "data_size": 65536 00:08:17.711 }, 00:08:17.711 { 00:08:17.711 "name": "BaseBdev3", 00:08:17.711 "uuid": "4e940e95-2bba-49d0-b94f-e4f490e90e3a", 00:08:17.711 "is_configured": true, 00:08:17.711 "data_offset": 0, 00:08:17.711 "data_size": 65536 00:08:17.711 } 00:08:17.711 ] 00:08:17.711 } 00:08:17.711 } 00:08:17.711 }' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:17.711 BaseBdev2 00:08:17.711 BaseBdev3' 00:08:17.711 23:42:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.711 [2024-12-06 23:42:29.259072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.711 [2024-12-06 23:42:29.259154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.711 [2024-12-06 23:42:29.259286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.711 [2024-12-06 23:42:29.259376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.711 [2024-12-06 23:42:29.259430] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63737 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63737 ']' 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63737 00:08:17.711 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:17.971 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.971 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63737 00:08:17.971 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.971 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.971 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63737' 00:08:17.971 killing process with pid 63737 00:08:17.971 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63737 00:08:17.971 [2024-12-06 23:42:29.307497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.971 23:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63737 00:08:18.231 [2024-12-06 23:42:29.590055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.172 23:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:19.172 00:08:19.172 real 0m10.644s 00:08:19.172 user 0m17.056s 00:08:19.172 sys 0m1.840s 00:08:19.172 23:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:19.172 23:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.172 ************************************ 00:08:19.172 END TEST raid_state_function_test 00:08:19.172 ************************************ 00:08:19.431 23:42:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:19.431 23:42:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:19.431 23:42:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.431 23:42:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.431 ************************************ 00:08:19.431 START TEST raid_state_function_test_sb 00:08:19.431 ************************************ 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:19.432 Process raid pid: 64358 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64358 
00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64358' 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64358 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64358 ']' 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.432 23:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.432 [2024-12-06 23:42:30.856921] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:08:19.432 [2024-12-06 23:42:30.857143] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.690 [2024-12-06 23:42:31.013910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.690 [2024-12-06 23:42:31.130388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.949 [2024-12-06 23:42:31.338194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.949 [2024-12-06 23:42:31.338312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.210 23:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.210 23:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:20.210 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:20.210 23:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.210 23:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.210 [2024-12-06 23:42:31.687153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:20.210 [2024-12-06 23:42:31.687276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:20.210 [2024-12-06 23:42:31.687312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.210 [2024-12-06 23:42:31.687338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.211 [2024-12-06 23:42:31.687357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:20.211 [2024-12-06 23:42:31.687379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.211 "name": "Existed_Raid", 00:08:20.211 "uuid": "a2ee6985-3af6-42da-8440-dacaf4c43c40", 00:08:20.211 "strip_size_kb": 64, 00:08:20.211 "state": "configuring", 00:08:20.211 "raid_level": "raid0", 00:08:20.211 "superblock": true, 00:08:20.211 "num_base_bdevs": 3, 00:08:20.211 "num_base_bdevs_discovered": 0, 00:08:20.211 "num_base_bdevs_operational": 3, 00:08:20.211 "base_bdevs_list": [ 00:08:20.211 { 00:08:20.211 "name": "BaseBdev1", 00:08:20.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.211 "is_configured": false, 00:08:20.211 "data_offset": 0, 00:08:20.211 "data_size": 0 00:08:20.211 }, 00:08:20.211 { 00:08:20.211 "name": "BaseBdev2", 00:08:20.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.211 "is_configured": false, 00:08:20.211 "data_offset": 0, 00:08:20.211 "data_size": 0 00:08:20.211 }, 00:08:20.211 { 00:08:20.211 "name": "BaseBdev3", 00:08:20.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.211 "is_configured": false, 00:08:20.211 "data_offset": 0, 00:08:20.211 "data_size": 0 00:08:20.211 } 00:08:20.211 ] 00:08:20.211 }' 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.211 23:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 [2024-12-06 23:42:32.138874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:20.813 [2024-12-06 23:42:32.138969] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 [2024-12-06 23:42:32.150864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:20.813 [2024-12-06 23:42:32.150955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:20.813 [2024-12-06 23:42:32.150982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.813 [2024-12-06 23:42:32.151006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.813 [2024-12-06 23:42:32.151025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:20.813 [2024-12-06 23:42:32.151047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 [2024-12-06 23:42:32.198823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.813 BaseBdev1 
00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.813 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 [ 00:08:20.813 { 00:08:20.813 "name": "BaseBdev1", 00:08:20.813 "aliases": [ 00:08:20.813 "d0656e28-0178-49b5-82e0-5034291a9a43" 00:08:20.813 ], 00:08:20.813 "product_name": "Malloc disk", 00:08:20.813 "block_size": 512, 00:08:20.813 "num_blocks": 65536, 00:08:20.813 "uuid": "d0656e28-0178-49b5-82e0-5034291a9a43", 00:08:20.813 "assigned_rate_limits": { 00:08:20.813 
"rw_ios_per_sec": 0, 00:08:20.813 "rw_mbytes_per_sec": 0, 00:08:20.813 "r_mbytes_per_sec": 0, 00:08:20.813 "w_mbytes_per_sec": 0 00:08:20.813 }, 00:08:20.813 "claimed": true, 00:08:20.813 "claim_type": "exclusive_write", 00:08:20.813 "zoned": false, 00:08:20.813 "supported_io_types": { 00:08:20.813 "read": true, 00:08:20.813 "write": true, 00:08:20.813 "unmap": true, 00:08:20.813 "flush": true, 00:08:20.813 "reset": true, 00:08:20.813 "nvme_admin": false, 00:08:20.813 "nvme_io": false, 00:08:20.813 "nvme_io_md": false, 00:08:20.813 "write_zeroes": true, 00:08:20.813 "zcopy": true, 00:08:20.813 "get_zone_info": false, 00:08:20.813 "zone_management": false, 00:08:20.813 "zone_append": false, 00:08:20.813 "compare": false, 00:08:20.814 "compare_and_write": false, 00:08:20.814 "abort": true, 00:08:20.814 "seek_hole": false, 00:08:20.814 "seek_data": false, 00:08:20.814 "copy": true, 00:08:20.814 "nvme_iov_md": false 00:08:20.814 }, 00:08:20.814 "memory_domains": [ 00:08:20.814 { 00:08:20.814 "dma_device_id": "system", 00:08:20.814 "dma_device_type": 1 00:08:20.814 }, 00:08:20.814 { 00:08:20.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.814 "dma_device_type": 2 00:08:20.814 } 00:08:20.814 ], 00:08:20.814 "driver_specific": {} 00:08:20.814 } 00:08:20.814 ] 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.814 "name": "Existed_Raid", 00:08:20.814 "uuid": "1c875c5a-7687-4c2e-8544-986447296d2e", 00:08:20.814 "strip_size_kb": 64, 00:08:20.814 "state": "configuring", 00:08:20.814 "raid_level": "raid0", 00:08:20.814 "superblock": true, 00:08:20.814 "num_base_bdevs": 3, 00:08:20.814 "num_base_bdevs_discovered": 1, 00:08:20.814 "num_base_bdevs_operational": 3, 00:08:20.814 "base_bdevs_list": [ 00:08:20.814 { 00:08:20.814 "name": "BaseBdev1", 00:08:20.814 "uuid": "d0656e28-0178-49b5-82e0-5034291a9a43", 00:08:20.814 "is_configured": true, 00:08:20.814 "data_offset": 2048, 00:08:20.814 "data_size": 63488 
00:08:20.814 }, 00:08:20.814 { 00:08:20.814 "name": "BaseBdev2", 00:08:20.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.814 "is_configured": false, 00:08:20.814 "data_offset": 0, 00:08:20.814 "data_size": 0 00:08:20.814 }, 00:08:20.814 { 00:08:20.814 "name": "BaseBdev3", 00:08:20.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.814 "is_configured": false, 00:08:20.814 "data_offset": 0, 00:08:20.814 "data_size": 0 00:08:20.814 } 00:08:20.814 ] 00:08:20.814 }' 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.814 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.073 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.073 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.073 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.332 [2024-12-06 23:42:32.634833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.332 [2024-12-06 23:42:32.634963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.332 [2024-12-06 23:42:32.642890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.332 [2024-12-06 
23:42:32.644848] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.332 [2024-12-06 23:42:32.644940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.332 [2024-12-06 23:42:32.644955] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:21.332 [2024-12-06 23:42:32.644966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.332 "name": "Existed_Raid", 00:08:21.332 "uuid": "771a2ab5-9486-4b9b-978b-d75725cfdc17", 00:08:21.332 "strip_size_kb": 64, 00:08:21.332 "state": "configuring", 00:08:21.332 "raid_level": "raid0", 00:08:21.332 "superblock": true, 00:08:21.332 "num_base_bdevs": 3, 00:08:21.332 "num_base_bdevs_discovered": 1, 00:08:21.332 "num_base_bdevs_operational": 3, 00:08:21.332 "base_bdevs_list": [ 00:08:21.332 { 00:08:21.332 "name": "BaseBdev1", 00:08:21.332 "uuid": "d0656e28-0178-49b5-82e0-5034291a9a43", 00:08:21.332 "is_configured": true, 00:08:21.332 "data_offset": 2048, 00:08:21.332 "data_size": 63488 00:08:21.332 }, 00:08:21.332 { 00:08:21.332 "name": "BaseBdev2", 00:08:21.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.332 "is_configured": false, 00:08:21.332 "data_offset": 0, 00:08:21.332 "data_size": 0 00:08:21.332 }, 00:08:21.332 { 00:08:21.332 "name": "BaseBdev3", 00:08:21.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.332 "is_configured": false, 00:08:21.332 "data_offset": 0, 00:08:21.332 "data_size": 0 00:08:21.332 } 00:08:21.332 ] 00:08:21.332 }' 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.332 23:42:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.594 [2024-12-06 23:42:33.092933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.594 BaseBdev2 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.594 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.594 [ 00:08:21.594 { 00:08:21.594 "name": "BaseBdev2", 00:08:21.594 "aliases": [ 00:08:21.594 "0130a497-5ccb-45e5-a066-97c20e81ef53" 00:08:21.594 ], 00:08:21.594 "product_name": "Malloc disk", 00:08:21.594 "block_size": 512, 00:08:21.594 "num_blocks": 65536, 00:08:21.594 "uuid": "0130a497-5ccb-45e5-a066-97c20e81ef53", 00:08:21.594 "assigned_rate_limits": { 00:08:21.594 "rw_ios_per_sec": 0, 00:08:21.594 "rw_mbytes_per_sec": 0, 00:08:21.594 "r_mbytes_per_sec": 0, 00:08:21.594 "w_mbytes_per_sec": 0 00:08:21.594 }, 00:08:21.594 "claimed": true, 00:08:21.594 "claim_type": "exclusive_write", 00:08:21.594 "zoned": false, 00:08:21.594 "supported_io_types": { 00:08:21.594 "read": true, 00:08:21.594 "write": true, 00:08:21.594 "unmap": true, 00:08:21.594 "flush": true, 00:08:21.594 "reset": true, 00:08:21.594 "nvme_admin": false, 00:08:21.594 "nvme_io": false, 00:08:21.594 "nvme_io_md": false, 00:08:21.594 "write_zeroes": true, 00:08:21.594 "zcopy": true, 00:08:21.594 "get_zone_info": false, 00:08:21.594 "zone_management": false, 00:08:21.594 "zone_append": false, 00:08:21.594 "compare": false, 00:08:21.594 "compare_and_write": false, 00:08:21.594 "abort": true, 00:08:21.594 "seek_hole": false, 00:08:21.594 "seek_data": false, 00:08:21.594 "copy": true, 00:08:21.594 "nvme_iov_md": false 00:08:21.594 }, 00:08:21.594 "memory_domains": [ 00:08:21.594 { 00:08:21.594 "dma_device_id": "system", 00:08:21.594 "dma_device_type": 1 00:08:21.594 }, 00:08:21.594 { 00:08:21.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.594 "dma_device_type": 2 00:08:21.594 } 00:08:21.594 ], 00:08:21.595 "driver_specific": {} 00:08:21.595 } 00:08:21.595 ] 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.595 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.855 23:42:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.855 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.855 "name": "Existed_Raid", 00:08:21.855 "uuid": "771a2ab5-9486-4b9b-978b-d75725cfdc17", 00:08:21.855 "strip_size_kb": 64, 00:08:21.855 "state": "configuring", 00:08:21.855 "raid_level": "raid0", 00:08:21.855 "superblock": true, 00:08:21.855 "num_base_bdevs": 3, 00:08:21.855 "num_base_bdevs_discovered": 2, 00:08:21.855 "num_base_bdevs_operational": 3, 00:08:21.855 "base_bdevs_list": [ 00:08:21.855 { 00:08:21.855 "name": "BaseBdev1", 00:08:21.855 "uuid": "d0656e28-0178-49b5-82e0-5034291a9a43", 00:08:21.855 "is_configured": true, 00:08:21.855 "data_offset": 2048, 00:08:21.855 "data_size": 63488 00:08:21.855 }, 00:08:21.855 { 00:08:21.855 "name": "BaseBdev2", 00:08:21.855 "uuid": "0130a497-5ccb-45e5-a066-97c20e81ef53", 00:08:21.855 "is_configured": true, 00:08:21.855 "data_offset": 2048, 00:08:21.855 "data_size": 63488 00:08:21.855 }, 00:08:21.855 { 00:08:21.855 "name": "BaseBdev3", 00:08:21.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.855 "is_configured": false, 00:08:21.855 "data_offset": 0, 00:08:21.855 "data_size": 0 00:08:21.855 } 00:08:21.855 ] 00:08:21.855 }' 00:08:21.855 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.855 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.114 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:22.114 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.114 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.114 [2024-12-06 23:42:33.622641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.114 [2024-12-06 23:42:33.623018] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.114 [2024-12-06 23:42:33.623076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.115 [2024-12-06 23:42:33.623368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:22.115 [2024-12-06 23:42:33.623566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.115 BaseBdev3 00:08:22.115 [2024-12-06 23:42:33.623608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:22.115 [2024-12-06 23:42:33.623815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.115 [ 00:08:22.115 { 00:08:22.115 "name": "BaseBdev3", 00:08:22.115 "aliases": [ 00:08:22.115 "3db22aad-621c-4275-b467-e44451ee3b63" 00:08:22.115 ], 00:08:22.115 "product_name": "Malloc disk", 00:08:22.115 "block_size": 512, 00:08:22.115 "num_blocks": 65536, 00:08:22.115 "uuid": "3db22aad-621c-4275-b467-e44451ee3b63", 00:08:22.115 "assigned_rate_limits": { 00:08:22.115 "rw_ios_per_sec": 0, 00:08:22.115 "rw_mbytes_per_sec": 0, 00:08:22.115 "r_mbytes_per_sec": 0, 00:08:22.115 "w_mbytes_per_sec": 0 00:08:22.115 }, 00:08:22.115 "claimed": true, 00:08:22.115 "claim_type": "exclusive_write", 00:08:22.115 "zoned": false, 00:08:22.115 "supported_io_types": { 00:08:22.115 "read": true, 00:08:22.115 "write": true, 00:08:22.115 "unmap": true, 00:08:22.115 "flush": true, 00:08:22.115 "reset": true, 00:08:22.115 "nvme_admin": false, 00:08:22.115 "nvme_io": false, 00:08:22.115 "nvme_io_md": false, 00:08:22.115 "write_zeroes": true, 00:08:22.115 "zcopy": true, 00:08:22.115 "get_zone_info": false, 00:08:22.115 "zone_management": false, 00:08:22.115 "zone_append": false, 00:08:22.115 "compare": false, 00:08:22.115 "compare_and_write": false, 00:08:22.115 "abort": true, 00:08:22.115 "seek_hole": false, 00:08:22.115 "seek_data": false, 00:08:22.115 "copy": true, 00:08:22.115 "nvme_iov_md": false 00:08:22.115 }, 00:08:22.115 "memory_domains": [ 00:08:22.115 { 00:08:22.115 "dma_device_id": "system", 00:08:22.115 "dma_device_type": 1 00:08:22.115 }, 00:08:22.115 { 00:08:22.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.115 "dma_device_type": 2 00:08:22.115 } 00:08:22.115 ], 00:08:22.115 "driver_specific": 
{} 00:08:22.115 } 00:08:22.115 ] 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:22.115 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.374 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.374 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.374 "name": "Existed_Raid", 00:08:22.374 "uuid": "771a2ab5-9486-4b9b-978b-d75725cfdc17", 00:08:22.374 "strip_size_kb": 64, 00:08:22.374 "state": "online", 00:08:22.374 "raid_level": "raid0", 00:08:22.374 "superblock": true, 00:08:22.374 "num_base_bdevs": 3, 00:08:22.374 "num_base_bdevs_discovered": 3, 00:08:22.374 "num_base_bdevs_operational": 3, 00:08:22.374 "base_bdevs_list": [ 00:08:22.374 { 00:08:22.374 "name": "BaseBdev1", 00:08:22.374 "uuid": "d0656e28-0178-49b5-82e0-5034291a9a43", 00:08:22.374 "is_configured": true, 00:08:22.374 "data_offset": 2048, 00:08:22.374 "data_size": 63488 00:08:22.374 }, 00:08:22.374 { 00:08:22.374 "name": "BaseBdev2", 00:08:22.374 "uuid": "0130a497-5ccb-45e5-a066-97c20e81ef53", 00:08:22.374 "is_configured": true, 00:08:22.374 "data_offset": 2048, 00:08:22.374 "data_size": 63488 00:08:22.374 }, 00:08:22.374 { 00:08:22.374 "name": "BaseBdev3", 00:08:22.374 "uuid": "3db22aad-621c-4275-b467-e44451ee3b63", 00:08:22.374 "is_configured": true, 00:08:22.374 "data_offset": 2048, 00:08:22.374 "data_size": 63488 00:08:22.374 } 00:08:22.374 ] 00:08:22.374 }' 00:08:22.374 23:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.374 23:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.634 [2024-12-06 23:42:34.138139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.634 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.634 "name": "Existed_Raid", 00:08:22.634 "aliases": [ 00:08:22.634 "771a2ab5-9486-4b9b-978b-d75725cfdc17" 00:08:22.634 ], 00:08:22.634 "product_name": "Raid Volume", 00:08:22.634 "block_size": 512, 00:08:22.634 "num_blocks": 190464, 00:08:22.634 "uuid": "771a2ab5-9486-4b9b-978b-d75725cfdc17", 00:08:22.634 "assigned_rate_limits": { 00:08:22.634 "rw_ios_per_sec": 0, 00:08:22.634 "rw_mbytes_per_sec": 0, 00:08:22.634 "r_mbytes_per_sec": 0, 00:08:22.634 "w_mbytes_per_sec": 0 00:08:22.634 }, 00:08:22.634 "claimed": false, 00:08:22.634 "zoned": false, 00:08:22.634 "supported_io_types": { 00:08:22.634 "read": true, 00:08:22.634 "write": true, 00:08:22.634 "unmap": true, 00:08:22.634 "flush": true, 00:08:22.634 "reset": true, 00:08:22.634 "nvme_admin": false, 00:08:22.634 "nvme_io": false, 00:08:22.634 "nvme_io_md": false, 00:08:22.634 
"write_zeroes": true, 00:08:22.634 "zcopy": false, 00:08:22.634 "get_zone_info": false, 00:08:22.634 "zone_management": false, 00:08:22.634 "zone_append": false, 00:08:22.634 "compare": false, 00:08:22.634 "compare_and_write": false, 00:08:22.634 "abort": false, 00:08:22.634 "seek_hole": false, 00:08:22.634 "seek_data": false, 00:08:22.634 "copy": false, 00:08:22.634 "nvme_iov_md": false 00:08:22.634 }, 00:08:22.634 "memory_domains": [ 00:08:22.634 { 00:08:22.634 "dma_device_id": "system", 00:08:22.634 "dma_device_type": 1 00:08:22.634 }, 00:08:22.634 { 00:08:22.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.634 "dma_device_type": 2 00:08:22.634 }, 00:08:22.634 { 00:08:22.634 "dma_device_id": "system", 00:08:22.634 "dma_device_type": 1 00:08:22.634 }, 00:08:22.634 { 00:08:22.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.634 "dma_device_type": 2 00:08:22.634 }, 00:08:22.634 { 00:08:22.634 "dma_device_id": "system", 00:08:22.634 "dma_device_type": 1 00:08:22.634 }, 00:08:22.634 { 00:08:22.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.634 "dma_device_type": 2 00:08:22.634 } 00:08:22.634 ], 00:08:22.634 "driver_specific": { 00:08:22.634 "raid": { 00:08:22.634 "uuid": "771a2ab5-9486-4b9b-978b-d75725cfdc17", 00:08:22.634 "strip_size_kb": 64, 00:08:22.634 "state": "online", 00:08:22.634 "raid_level": "raid0", 00:08:22.634 "superblock": true, 00:08:22.635 "num_base_bdevs": 3, 00:08:22.635 "num_base_bdevs_discovered": 3, 00:08:22.635 "num_base_bdevs_operational": 3, 00:08:22.635 "base_bdevs_list": [ 00:08:22.635 { 00:08:22.635 "name": "BaseBdev1", 00:08:22.635 "uuid": "d0656e28-0178-49b5-82e0-5034291a9a43", 00:08:22.635 "is_configured": true, 00:08:22.635 "data_offset": 2048, 00:08:22.635 "data_size": 63488 00:08:22.635 }, 00:08:22.635 { 00:08:22.635 "name": "BaseBdev2", 00:08:22.635 "uuid": "0130a497-5ccb-45e5-a066-97c20e81ef53", 00:08:22.635 "is_configured": true, 00:08:22.635 "data_offset": 2048, 00:08:22.635 "data_size": 63488 00:08:22.635 }, 
00:08:22.635 {
00:08:22.635 "name": "BaseBdev3",
00:08:22.635 "uuid": "3db22aad-621c-4275-b467-e44451ee3b63",
00:08:22.635 "is_configured": true,
00:08:22.635 "data_offset": 2048,
00:08:22.635 "data_size": 63488
00:08:22.635 }
00:08:22.635 ]
00:08:22.635 }
00:08:22.635 }
00:08:22.635 }'
00:08:22.635 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:22.895 BaseBdev2
00:08:22.895 BaseBdev3'
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.895 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.895 [2024-12-06 23:42:34.413371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:22.895 [2024-12-06 23:42:34.413399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:22.895 [2024-12-06 23:42:34.413450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.154 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:23.154 "name": "Existed_Raid",
00:08:23.154 "uuid": "771a2ab5-9486-4b9b-978b-d75725cfdc17",
00:08:23.154 "strip_size_kb": 64,
00:08:23.154 "state": "offline",
00:08:23.154 "raid_level": "raid0",
00:08:23.154 "superblock": true,
00:08:23.154 "num_base_bdevs": 3,
00:08:23.154 "num_base_bdevs_discovered": 2,
00:08:23.154 "num_base_bdevs_operational": 2,
00:08:23.154 "base_bdevs_list": [
00:08:23.154 {
00:08:23.154 "name": null,
00:08:23.154 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:23.154 "is_configured": false,
00:08:23.154 "data_offset": 0,
00:08:23.154 "data_size": 63488
00:08:23.154 },
00:08:23.154 {
00:08:23.154 "name": "BaseBdev2",
00:08:23.155 "uuid": "0130a497-5ccb-45e5-a066-97c20e81ef53",
00:08:23.155 "is_configured": true,
00:08:23.155 "data_offset": 2048,
00:08:23.155 "data_size": 63488
00:08:23.155 },
00:08:23.155 {
00:08:23.155 "name": "BaseBdev3",
00:08:23.155 "uuid": "3db22aad-621c-4275-b467-e44451ee3b63",
00:08:23.155 "is_configured": true,
00:08:23.155 "data_offset": 2048,
00:08:23.155 "data_size": 63488
00:08:23.155 }
00:08:23.155 ]
00:08:23.155 }'
00:08:23.155 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:23.155 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.414 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:23.414 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:23.414 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.414 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:23.414 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.414 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.414 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.672 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:23.672 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:23.672 23:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:23.672 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.672 23:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.672 [2024-12-06 23:42:34.985947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.672 [2024-12-06 23:42:35.132286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:23.672 [2024-12-06 23:42:35.132338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.672 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.931 BaseBdev2
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.931 [
00:08:23.931 {
00:08:23.931 "name": "BaseBdev2",
00:08:23.931 "aliases": [
00:08:23.931 "51787c74-57e8-4d79-b832-5e4d9f40b2d1"
00:08:23.931 ],
00:08:23.931 "product_name": "Malloc disk",
00:08:23.931 "block_size": 512,
00:08:23.931 "num_blocks": 65536,
00:08:23.931 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1",
00:08:23.931 "assigned_rate_limits": {
00:08:23.931 "rw_ios_per_sec": 0,
00:08:23.931 "rw_mbytes_per_sec": 0,
00:08:23.931 "r_mbytes_per_sec": 0,
00:08:23.931 "w_mbytes_per_sec": 0
00:08:23.931 },
00:08:23.931 "claimed": false,
00:08:23.931 "zoned": false,
00:08:23.931 "supported_io_types": {
00:08:23.931 "read": true,
00:08:23.931 "write": true,
00:08:23.931 "unmap": true,
00:08:23.931 "flush": true,
00:08:23.931 "reset": true,
00:08:23.931 "nvme_admin": false,
00:08:23.931 "nvme_io": false,
00:08:23.931 "nvme_io_md": false,
00:08:23.931 "write_zeroes": true,
00:08:23.931 "zcopy": true,
00:08:23.931 "get_zone_info": false,
00:08:23.931 "zone_management": false,
00:08:23.931 "zone_append": false,
00:08:23.931 "compare": false,
00:08:23.931 "compare_and_write": false,
00:08:23.931 "abort": true,
00:08:23.931 "seek_hole": false,
00:08:23.931 "seek_data": false,
00:08:23.931 "copy": true,
00:08:23.931 "nvme_iov_md": false
00:08:23.931 },
00:08:23.931 "memory_domains": [
00:08:23.931 {
00:08:23.931 "dma_device_id": "system",
00:08:23.931 "dma_device_type": 1
00:08:23.931 },
00:08:23.931 {
00:08:23.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.931 "dma_device_type": 2
00:08:23.931 }
00:08:23.931 ],
00:08:23.931 "driver_specific": {}
00:08:23.931 }
00:08:23.931 ]
00:08:23.931 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.932 BaseBdev3
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.932 [
00:08:23.932 {
00:08:23.932 "name": "BaseBdev3",
00:08:23.932 "aliases": [
00:08:23.932 "f86e0fab-2738-4093-bd39-ee10d066cd04"
00:08:23.932 ],
00:08:23.932 "product_name": "Malloc disk",
00:08:23.932 "block_size": 512,
00:08:23.932 "num_blocks": 65536,
00:08:23.932 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04",
00:08:23.932 "assigned_rate_limits": {
00:08:23.932 "rw_ios_per_sec": 0,
00:08:23.932 "rw_mbytes_per_sec": 0,
00:08:23.932 "r_mbytes_per_sec": 0,
00:08:23.932 "w_mbytes_per_sec": 0
00:08:23.932 },
00:08:23.932 "claimed": false,
00:08:23.932 "zoned": false,
00:08:23.932 "supported_io_types": {
00:08:23.932 "read": true,
00:08:23.932 "write": true,
00:08:23.932 "unmap": true,
00:08:23.932 "flush": true,
00:08:23.932 "reset": true,
00:08:23.932 "nvme_admin": false,
00:08:23.932 "nvme_io": false,
00:08:23.932 "nvme_io_md": false,
00:08:23.932 "write_zeroes": true,
00:08:23.932 "zcopy": true,
00:08:23.932 "get_zone_info": false,
00:08:23.932 "zone_management": false,
00:08:23.932 "zone_append": false,
00:08:23.932 "compare": false,
00:08:23.932 "compare_and_write": false,
00:08:23.932 "abort": true,
00:08:23.932 "seek_hole": false,
00:08:23.932 "seek_data": false,
00:08:23.932 "copy": true,
00:08:23.932 "nvme_iov_md": false
00:08:23.932 },
00:08:23.932 "memory_domains": [
00:08:23.932 {
00:08:23.932 "dma_device_id": "system",
00:08:23.932 "dma_device_type": 1
00:08:23.932 },
00:08:23.932 {
00:08:23.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.932 "dma_device_type": 2
00:08:23.932 }
00:08:23.932 ],
00:08:23.932 "driver_specific": {}
00:08:23.932 }
00:08:23.932 ]
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.932 [2024-12-06 23:42:35.433573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:23.932 [2024-12-06 23:42:35.433689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:23.932 [2024-12-06 23:42:35.433736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:23.932 [2024-12-06 23:42:35.435544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:23.932 "name": "Existed_Raid",
00:08:23.932 "uuid": "be0a5921-f002-4834-b8af-24eec0594187",
00:08:23.932 "strip_size_kb": 64,
00:08:23.932 "state": "configuring",
00:08:23.932 "raid_level": "raid0",
00:08:23.932 "superblock": true,
00:08:23.932 "num_base_bdevs": 3,
00:08:23.932 "num_base_bdevs_discovered": 2,
00:08:23.932 "num_base_bdevs_operational": 3,
00:08:23.932 "base_bdevs_list": [
00:08:23.932 {
00:08:23.932 "name": "BaseBdev1",
00:08:23.932 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:23.932 "is_configured": false,
00:08:23.932 "data_offset": 0,
00:08:23.932 "data_size": 0
00:08:23.932 },
00:08:23.932 {
00:08:23.932 "name": "BaseBdev2",
00:08:23.932 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1",
00:08:23.932 "is_configured": true,
00:08:23.932 "data_offset": 2048,
00:08:23.932 "data_size": 63488
00:08:23.932 },
00:08:23.932 {
00:08:23.932 "name": "BaseBdev3",
00:08:23.932 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04",
00:08:23.932 "is_configured": true,
00:08:23.932 "data_offset": 2048,
00:08:23.932 "data_size": 63488
00:08:23.932 }
00:08:23.932 ]
00:08:23.932 }'
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:23.932 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.501 [2024-12-06 23:42:35.876844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:24.501 "name": "Existed_Raid",
00:08:24.501 "uuid": "be0a5921-f002-4834-b8af-24eec0594187",
00:08:24.501 "strip_size_kb": 64,
00:08:24.501 "state": "configuring",
00:08:24.501 "raid_level": "raid0",
00:08:24.501 "superblock": true,
00:08:24.501 "num_base_bdevs": 3,
00:08:24.501 "num_base_bdevs_discovered": 1,
00:08:24.501 "num_base_bdevs_operational": 3,
00:08:24.501 "base_bdevs_list": [
00:08:24.501 {
00:08:24.501 "name": "BaseBdev1",
00:08:24.501 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.501 "is_configured": false,
00:08:24.501 "data_offset": 0,
00:08:24.501 "data_size": 0
00:08:24.501 },
00:08:24.501 {
00:08:24.501 "name": null,
00:08:24.501 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1",
00:08:24.501 "is_configured": false,
00:08:24.501 "data_offset": 0,
00:08:24.501 "data_size": 63488
00:08:24.501 },
00:08:24.501 {
00:08:24.501 "name": "BaseBdev3",
00:08:24.501 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04",
00:08:24.501 "is_configured": true,
00:08:24.501 "data_offset": 2048,
00:08:24.501 "data_size": 63488
00:08:24.501 }
00:08:24.501 ]
00:08:24.501 }'
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:24.501 23:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.069 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.069 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.069 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.069 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:25.069 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.070 [2024-12-06 23:42:36.423651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:25.070 BaseBdev1
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.070 [
00:08:25.070 {
00:08:25.070 "name": "BaseBdev1",
00:08:25.070 "aliases": [
00:08:25.070 "7c8df065-18c5-431a-9ca7-b056c1b59ebc"
00:08:25.070 ],
00:08:25.070 "product_name": "Malloc disk",
00:08:25.070 "block_size": 512,
00:08:25.070 "num_blocks": 65536,
00:08:25.070 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc",
00:08:25.070 "assigned_rate_limits": {
00:08:25.070 "rw_ios_per_sec": 0,
00:08:25.070 "rw_mbytes_per_sec": 0,
00:08:25.070 "r_mbytes_per_sec": 0,
00:08:25.070 "w_mbytes_per_sec": 0
00:08:25.070 },
00:08:25.070 "claimed": true,
00:08:25.070 "claim_type": "exclusive_write",
00:08:25.070 "zoned": false,
00:08:25.070 "supported_io_types": {
00:08:25.070 "read": true,
00:08:25.070 "write": true,
00:08:25.070 "unmap": true,
00:08:25.070 "flush": true,
00:08:25.070 "reset": true,
00:08:25.070 "nvme_admin": false,
00:08:25.070 "nvme_io": false,
00:08:25.070 "nvme_io_md": false,
00:08:25.070 "write_zeroes": true,
00:08:25.070 "zcopy": true,
00:08:25.070 "get_zone_info": false,
00:08:25.070 "zone_management": false,
00:08:25.070 "zone_append": false,
00:08:25.070 "compare": false,
00:08:25.070 "compare_and_write": false,
00:08:25.070 "abort": true,
00:08:25.070 "seek_hole": false,
00:08:25.070 "seek_data": false,
00:08:25.070 "copy": true,
00:08:25.070 "nvme_iov_md": false
00:08:25.070 },
00:08:25.070 "memory_domains": [
00:08:25.070 {
00:08:25.070 "dma_device_id": "system",
00:08:25.070 "dma_device_type": 1
00:08:25.070 },
00:08:25.070 {
00:08:25.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:25.070 "dma_device_type": 2
00:08:25.070 }
00:08:25.070 ],
00:08:25.070 "driver_specific": {}
00:08:25.070 }
00:08:25.070 ]
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:25.070 "name": "Existed_Raid",
00:08:25.070 "uuid": "be0a5921-f002-4834-b8af-24eec0594187",
00:08:25.070 "strip_size_kb": 64,
00:08:25.070 "state": "configuring",
00:08:25.070 "raid_level": "raid0",
00:08:25.070 "superblock": true,
00:08:25.070 "num_base_bdevs": 3,
00:08:25.070 "num_base_bdevs_discovered": 2,
00:08:25.070 "num_base_bdevs_operational": 3,
00:08:25.070 "base_bdevs_list": [
00:08:25.070 {
00:08:25.070 "name": "BaseBdev1",
00:08:25.070 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc",
00:08:25.070 "is_configured": true,
00:08:25.070 "data_offset": 2048,
00:08:25.070 "data_size": 63488
00:08:25.070 },
00:08:25.070 {
00:08:25.070 "name": null,
00:08:25.070 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1",
00:08:25.070 "is_configured": false,
00:08:25.070 "data_offset": 0,
00:08:25.070 "data_size": 63488
00:08:25.070 },
00:08:25.070 {
00:08:25.070 "name": "BaseBdev3",
00:08:25.070 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04",
00:08:25.070 "is_configured": true,
00:08:25.070 "data_offset": 2048,
00:08:25.070 "data_size": 63488
00:08:25.070 }
00:08:25.070 ]
00:08:25.070 }'
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:25.070 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.639 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.639 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:25.639 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.639 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.639 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.639 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.640 [2024-12-06 23:42:36.954803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set
+x 00:08:25.640 23:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.640 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.640 "name": "Existed_Raid", 00:08:25.640 "uuid": "be0a5921-f002-4834-b8af-24eec0594187", 00:08:25.640 "strip_size_kb": 64, 00:08:25.640 "state": "configuring", 00:08:25.640 "raid_level": "raid0", 00:08:25.640 "superblock": true, 00:08:25.640 "num_base_bdevs": 3, 00:08:25.640 "num_base_bdevs_discovered": 1, 00:08:25.640 "num_base_bdevs_operational": 3, 00:08:25.640 "base_bdevs_list": [ 00:08:25.640 { 00:08:25.640 "name": "BaseBdev1", 00:08:25.640 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc", 00:08:25.640 "is_configured": true, 00:08:25.640 "data_offset": 2048, 00:08:25.640 "data_size": 63488 00:08:25.640 }, 00:08:25.640 { 00:08:25.640 "name": null, 00:08:25.640 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1", 00:08:25.640 "is_configured": false, 00:08:25.640 "data_offset": 0, 00:08:25.640 "data_size": 63488 00:08:25.640 }, 00:08:25.640 { 00:08:25.640 "name": null, 00:08:25.640 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04", 00:08:25.640 "is_configured": false, 00:08:25.640 "data_offset": 0, 00:08:25.640 "data_size": 63488 00:08:25.640 } 00:08:25.640 ] 00:08:25.640 }' 00:08:25.640 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.640 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.899 [2024-12-06 23:42:37.430839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.899 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.158 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.158 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.158 "name": "Existed_Raid", 00:08:26.158 "uuid": "be0a5921-f002-4834-b8af-24eec0594187", 00:08:26.158 "strip_size_kb": 64, 00:08:26.158 "state": "configuring", 00:08:26.158 "raid_level": "raid0", 00:08:26.158 "superblock": true, 00:08:26.158 "num_base_bdevs": 3, 00:08:26.158 "num_base_bdevs_discovered": 2, 00:08:26.158 "num_base_bdevs_operational": 3, 00:08:26.158 "base_bdevs_list": [ 00:08:26.158 { 00:08:26.158 "name": "BaseBdev1", 00:08:26.158 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc", 00:08:26.158 "is_configured": true, 00:08:26.158 "data_offset": 2048, 00:08:26.158 "data_size": 63488 00:08:26.158 }, 00:08:26.158 { 00:08:26.158 "name": null, 00:08:26.158 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1", 00:08:26.158 "is_configured": false, 00:08:26.158 "data_offset": 0, 00:08:26.158 "data_size": 63488 00:08:26.158 }, 00:08:26.158 { 00:08:26.158 "name": "BaseBdev3", 00:08:26.158 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04", 00:08:26.158 "is_configured": true, 00:08:26.158 "data_offset": 2048, 00:08:26.158 "data_size": 63488 00:08:26.158 } 00:08:26.158 ] 00:08:26.158 }' 00:08:26.158 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.158 23:42:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.418 23:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.418 [2024-12-06 23:42:37.914854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.678 "name": "Existed_Raid", 00:08:26.678 "uuid": "be0a5921-f002-4834-b8af-24eec0594187", 00:08:26.678 "strip_size_kb": 64, 00:08:26.678 "state": "configuring", 00:08:26.678 "raid_level": "raid0", 00:08:26.678 "superblock": true, 00:08:26.678 "num_base_bdevs": 3, 00:08:26.678 "num_base_bdevs_discovered": 1, 00:08:26.678 "num_base_bdevs_operational": 3, 00:08:26.678 "base_bdevs_list": [ 00:08:26.678 { 00:08:26.678 "name": null, 00:08:26.678 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc", 00:08:26.678 "is_configured": false, 00:08:26.678 "data_offset": 0, 00:08:26.678 "data_size": 63488 00:08:26.678 }, 00:08:26.678 { 00:08:26.678 "name": null, 00:08:26.678 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1", 00:08:26.678 "is_configured": false, 00:08:26.678 "data_offset": 0, 00:08:26.678 
"data_size": 63488 00:08:26.678 }, 00:08:26.678 { 00:08:26.678 "name": "BaseBdev3", 00:08:26.678 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04", 00:08:26.678 "is_configured": true, 00:08:26.678 "data_offset": 2048, 00:08:26.678 "data_size": 63488 00:08:26.678 } 00:08:26.678 ] 00:08:26.678 }' 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.678 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.938 [2024-12-06 23:42:38.482819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.938 23:42:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.938 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.198 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.198 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.198 "name": "Existed_Raid", 00:08:27.198 "uuid": "be0a5921-f002-4834-b8af-24eec0594187", 00:08:27.198 "strip_size_kb": 64, 00:08:27.198 "state": "configuring", 00:08:27.198 "raid_level": "raid0", 00:08:27.198 "superblock": true, 00:08:27.198 "num_base_bdevs": 3, 00:08:27.198 
"num_base_bdevs_discovered": 2, 00:08:27.198 "num_base_bdevs_operational": 3, 00:08:27.198 "base_bdevs_list": [ 00:08:27.198 { 00:08:27.198 "name": null, 00:08:27.198 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc", 00:08:27.198 "is_configured": false, 00:08:27.198 "data_offset": 0, 00:08:27.198 "data_size": 63488 00:08:27.198 }, 00:08:27.198 { 00:08:27.198 "name": "BaseBdev2", 00:08:27.198 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1", 00:08:27.198 "is_configured": true, 00:08:27.198 "data_offset": 2048, 00:08:27.198 "data_size": 63488 00:08:27.198 }, 00:08:27.198 { 00:08:27.198 "name": "BaseBdev3", 00:08:27.198 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04", 00:08:27.198 "is_configured": true, 00:08:27.198 "data_offset": 2048, 00:08:27.198 "data_size": 63488 00:08:27.198 } 00:08:27.198 ] 00:08:27.198 }' 00:08:27.198 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.198 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.458 23:42:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:27.458 23:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7c8df065-18c5-431a-9ca7-b056c1b59ebc 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.718 [2024-12-06 23:42:39.072290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:27.718 [2024-12-06 23:42:39.072590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:27.718 [2024-12-06 23:42:39.072630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:27.718 [2024-12-06 23:42:39.072962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:27.718 [2024-12-06 23:42:39.073143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:27.718 [2024-12-06 23:42:39.073185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:27.718 NewBaseBdev 00:08:27.718 [2024-12-06 23:42:39.073363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.718 [ 00:08:27.718 { 00:08:27.718 "name": "NewBaseBdev", 00:08:27.718 "aliases": [ 00:08:27.718 "7c8df065-18c5-431a-9ca7-b056c1b59ebc" 00:08:27.718 ], 00:08:27.718 "product_name": "Malloc disk", 00:08:27.718 "block_size": 512, 00:08:27.718 "num_blocks": 65536, 00:08:27.718 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc", 00:08:27.718 "assigned_rate_limits": { 00:08:27.718 "rw_ios_per_sec": 0, 00:08:27.718 "rw_mbytes_per_sec": 0, 00:08:27.718 "r_mbytes_per_sec": 0, 00:08:27.718 "w_mbytes_per_sec": 0 00:08:27.718 }, 00:08:27.718 "claimed": true, 00:08:27.718 "claim_type": "exclusive_write", 00:08:27.718 "zoned": false, 00:08:27.718 "supported_io_types": { 00:08:27.718 "read": true, 00:08:27.718 "write": true, 
00:08:27.718 "unmap": true, 00:08:27.718 "flush": true, 00:08:27.718 "reset": true, 00:08:27.718 "nvme_admin": false, 00:08:27.718 "nvme_io": false, 00:08:27.718 "nvme_io_md": false, 00:08:27.718 "write_zeroes": true, 00:08:27.718 "zcopy": true, 00:08:27.718 "get_zone_info": false, 00:08:27.718 "zone_management": false, 00:08:27.718 "zone_append": false, 00:08:27.718 "compare": false, 00:08:27.718 "compare_and_write": false, 00:08:27.718 "abort": true, 00:08:27.718 "seek_hole": false, 00:08:27.718 "seek_data": false, 00:08:27.718 "copy": true, 00:08:27.718 "nvme_iov_md": false 00:08:27.718 }, 00:08:27.718 "memory_domains": [ 00:08:27.718 { 00:08:27.718 "dma_device_id": "system", 00:08:27.718 "dma_device_type": 1 00:08:27.718 }, 00:08:27.718 { 00:08:27.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.718 "dma_device_type": 2 00:08:27.718 } 00:08:27.718 ], 00:08:27.718 "driver_specific": {} 00:08:27.718 } 00:08:27.718 ] 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.718 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.719 "name": "Existed_Raid", 00:08:27.719 "uuid": "be0a5921-f002-4834-b8af-24eec0594187", 00:08:27.719 "strip_size_kb": 64, 00:08:27.719 "state": "online", 00:08:27.719 "raid_level": "raid0", 00:08:27.719 "superblock": true, 00:08:27.719 "num_base_bdevs": 3, 00:08:27.719 "num_base_bdevs_discovered": 3, 00:08:27.719 "num_base_bdevs_operational": 3, 00:08:27.719 "base_bdevs_list": [ 00:08:27.719 { 00:08:27.719 "name": "NewBaseBdev", 00:08:27.719 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc", 00:08:27.719 "is_configured": true, 00:08:27.719 "data_offset": 2048, 00:08:27.719 "data_size": 63488 00:08:27.719 }, 00:08:27.719 { 00:08:27.719 "name": "BaseBdev2", 00:08:27.719 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1", 00:08:27.719 "is_configured": true, 00:08:27.719 "data_offset": 2048, 00:08:27.719 "data_size": 63488 00:08:27.719 }, 00:08:27.719 { 00:08:27.719 "name": "BaseBdev3", 00:08:27.719 "uuid": 
"f86e0fab-2738-4093-bd39-ee10d066cd04", 00:08:27.719 "is_configured": true, 00:08:27.719 "data_offset": 2048, 00:08:27.719 "data_size": 63488 00:08:27.719 } 00:08:27.719 ] 00:08:27.719 }' 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.719 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.978 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.978 [2024-12-06 23:42:39.535911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.237 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.237 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.237 "name": "Existed_Raid", 00:08:28.237 "aliases": [ 00:08:28.237 "be0a5921-f002-4834-b8af-24eec0594187" 
00:08:28.237 ], 00:08:28.237 "product_name": "Raid Volume", 00:08:28.237 "block_size": 512, 00:08:28.237 "num_blocks": 190464, 00:08:28.237 "uuid": "be0a5921-f002-4834-b8af-24eec0594187", 00:08:28.237 "assigned_rate_limits": { 00:08:28.237 "rw_ios_per_sec": 0, 00:08:28.237 "rw_mbytes_per_sec": 0, 00:08:28.237 "r_mbytes_per_sec": 0, 00:08:28.237 "w_mbytes_per_sec": 0 00:08:28.237 }, 00:08:28.237 "claimed": false, 00:08:28.237 "zoned": false, 00:08:28.237 "supported_io_types": { 00:08:28.237 "read": true, 00:08:28.237 "write": true, 00:08:28.237 "unmap": true, 00:08:28.237 "flush": true, 00:08:28.237 "reset": true, 00:08:28.237 "nvme_admin": false, 00:08:28.237 "nvme_io": false, 00:08:28.237 "nvme_io_md": false, 00:08:28.237 "write_zeroes": true, 00:08:28.237 "zcopy": false, 00:08:28.237 "get_zone_info": false, 00:08:28.237 "zone_management": false, 00:08:28.237 "zone_append": false, 00:08:28.237 "compare": false, 00:08:28.237 "compare_and_write": false, 00:08:28.237 "abort": false, 00:08:28.237 "seek_hole": false, 00:08:28.237 "seek_data": false, 00:08:28.237 "copy": false, 00:08:28.237 "nvme_iov_md": false 00:08:28.237 }, 00:08:28.237 "memory_domains": [ 00:08:28.237 { 00:08:28.237 "dma_device_id": "system", 00:08:28.237 "dma_device_type": 1 00:08:28.237 }, 00:08:28.237 { 00:08:28.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.237 "dma_device_type": 2 00:08:28.237 }, 00:08:28.238 { 00:08:28.238 "dma_device_id": "system", 00:08:28.238 "dma_device_type": 1 00:08:28.238 }, 00:08:28.238 { 00:08:28.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.238 "dma_device_type": 2 00:08:28.238 }, 00:08:28.238 { 00:08:28.238 "dma_device_id": "system", 00:08:28.238 "dma_device_type": 1 00:08:28.238 }, 00:08:28.238 { 00:08:28.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.238 "dma_device_type": 2 00:08:28.238 } 00:08:28.238 ], 00:08:28.238 "driver_specific": { 00:08:28.238 "raid": { 00:08:28.238 "uuid": "be0a5921-f002-4834-b8af-24eec0594187", 00:08:28.238 
"strip_size_kb": 64, 00:08:28.238 "state": "online", 00:08:28.238 "raid_level": "raid0", 00:08:28.238 "superblock": true, 00:08:28.238 "num_base_bdevs": 3, 00:08:28.238 "num_base_bdevs_discovered": 3, 00:08:28.238 "num_base_bdevs_operational": 3, 00:08:28.238 "base_bdevs_list": [ 00:08:28.238 { 00:08:28.238 "name": "NewBaseBdev", 00:08:28.238 "uuid": "7c8df065-18c5-431a-9ca7-b056c1b59ebc", 00:08:28.238 "is_configured": true, 00:08:28.238 "data_offset": 2048, 00:08:28.238 "data_size": 63488 00:08:28.238 }, 00:08:28.238 { 00:08:28.238 "name": "BaseBdev2", 00:08:28.238 "uuid": "51787c74-57e8-4d79-b832-5e4d9f40b2d1", 00:08:28.238 "is_configured": true, 00:08:28.238 "data_offset": 2048, 00:08:28.238 "data_size": 63488 00:08:28.238 }, 00:08:28.238 { 00:08:28.238 "name": "BaseBdev3", 00:08:28.238 "uuid": "f86e0fab-2738-4093-bd39-ee10d066cd04", 00:08:28.238 "is_configured": true, 00:08:28.238 "data_offset": 2048, 00:08:28.238 "data_size": 63488 00:08:28.238 } 00:08:28.238 ] 00:08:28.238 } 00:08:28.238 } 00:08:28.238 }' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:28.238 BaseBdev2 00:08:28.238 BaseBdev3' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.238 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.498 [2024-12-06 23:42:39.811074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.498 [2024-12-06 23:42:39.811147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.498 [2024-12-06 23:42:39.811247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.498 [2024-12-06 23:42:39.811353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.498 [2024-12-06 23:42:39.811407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64358 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64358 ']' 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64358 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64358 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64358' 00:08:28.498 killing process with pid 64358 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64358 00:08:28.498 [2024-12-06 23:42:39.859531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.498 23:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64358 00:08:28.767 [2024-12-06 23:42:40.151616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.707 ************************************ 00:08:29.708 END TEST raid_state_function_test_sb 00:08:29.708 ************************************ 00:08:29.708 23:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:29.708 00:08:29.708 real 0m10.488s 00:08:29.708 user 0m16.724s 00:08:29.708 sys 0m1.803s 00:08:29.708 23:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.708 23:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.967 23:42:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:29.967 23:42:41 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:29.967 23:42:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.967 23:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.967 ************************************ 00:08:29.967 START TEST raid_superblock_test 00:08:29.967 ************************************ 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:29.967 23:42:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64978 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64978 00:08:29.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64978 ']' 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.967 23:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.967 [2024-12-06 23:42:41.404114] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:08:29.967 [2024-12-06 23:42:41.404292] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64978 ] 00:08:30.227 [2024-12-06 23:42:41.577730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.227 [2024-12-06 23:42:41.690410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.486 [2024-12-06 23:42:41.889915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.486 [2024-12-06 23:42:41.890043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:30.747 
23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.747 malloc1 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.747 [2024-12-06 23:42:42.290405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.747 [2024-12-06 23:42:42.290464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.747 [2024-12-06 23:42:42.290486] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:30.747 [2024-12-06 23:42:42.290495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.747 [2024-12-06 23:42:42.292605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.747 [2024-12-06 23:42:42.292641] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.747 pt1 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.747 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.007 malloc2 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.007 [2024-12-06 23:42:42.344898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:31.007 [2024-12-06 23:42:42.345002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.007 [2024-12-06 23:42:42.345044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:31.007 [2024-12-06 23:42:42.345073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.007 [2024-12-06 23:42:42.347149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.007 [2024-12-06 23:42:42.347228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:31.007 
pt2 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.007 malloc3 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.007 [2024-12-06 23:42:42.416013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:31.007 [2024-12-06 23:42:42.416112] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.007 [2024-12-06 23:42:42.416151] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:31.007 [2024-12-06 23:42:42.416180] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.007 [2024-12-06 23:42:42.418190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.007 [2024-12-06 23:42:42.418262] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:31.007 pt3 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.007 [2024-12-06 23:42:42.432053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:31.007 [2024-12-06 23:42:42.433880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:31.007 [2024-12-06 23:42:42.433949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:31.007 [2024-12-06 23:42:42.434121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:31.007 [2024-12-06 23:42:42.434135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.007 [2024-12-06 23:42:42.434402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:31.007 [2024-12-06 23:42:42.434577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:31.007 [2024-12-06 23:42:42.434586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:31.007 [2024-12-06 23:42:42.434786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.007 23:42:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.007 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.008 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.008 "name": "raid_bdev1", 00:08:31.008 "uuid": "8c57dd18-5723-4bfb-a65d-ca775c72f95c", 00:08:31.008 "strip_size_kb": 64, 00:08:31.008 "state": "online", 00:08:31.008 "raid_level": "raid0", 00:08:31.008 "superblock": true, 00:08:31.008 "num_base_bdevs": 3, 00:08:31.008 "num_base_bdevs_discovered": 3, 00:08:31.008 "num_base_bdevs_operational": 3, 00:08:31.008 "base_bdevs_list": [ 00:08:31.008 { 00:08:31.008 "name": "pt1", 00:08:31.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.008 "is_configured": true, 00:08:31.008 "data_offset": 2048, 00:08:31.008 "data_size": 63488 00:08:31.008 }, 00:08:31.008 { 00:08:31.008 "name": "pt2", 00:08:31.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.008 "is_configured": true, 00:08:31.008 "data_offset": 2048, 00:08:31.008 "data_size": 63488 00:08:31.008 }, 00:08:31.008 { 00:08:31.008 "name": "pt3", 00:08:31.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.008 "is_configured": true, 00:08:31.008 "data_offset": 2048, 00:08:31.008 "data_size": 63488 00:08:31.008 } 00:08:31.008 ] 00:08:31.008 }' 00:08:31.008 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.008 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.578 [2024-12-06 23:42:42.947429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.578 "name": "raid_bdev1", 00:08:31.578 "aliases": [ 00:08:31.578 "8c57dd18-5723-4bfb-a65d-ca775c72f95c" 00:08:31.578 ], 00:08:31.578 "product_name": "Raid Volume", 00:08:31.578 "block_size": 512, 00:08:31.578 "num_blocks": 190464, 00:08:31.578 "uuid": "8c57dd18-5723-4bfb-a65d-ca775c72f95c", 00:08:31.578 "assigned_rate_limits": { 00:08:31.578 "rw_ios_per_sec": 0, 00:08:31.578 "rw_mbytes_per_sec": 0, 00:08:31.578 "r_mbytes_per_sec": 0, 00:08:31.578 "w_mbytes_per_sec": 0 00:08:31.578 }, 00:08:31.578 "claimed": false, 00:08:31.578 "zoned": false, 00:08:31.578 "supported_io_types": { 00:08:31.578 "read": true, 00:08:31.578 "write": true, 00:08:31.578 "unmap": true, 00:08:31.578 "flush": true, 00:08:31.578 "reset": true, 00:08:31.578 "nvme_admin": false, 00:08:31.578 "nvme_io": false, 00:08:31.578 "nvme_io_md": false, 00:08:31.578 "write_zeroes": true, 00:08:31.578 "zcopy": false, 00:08:31.578 "get_zone_info": false, 00:08:31.578 "zone_management": false, 00:08:31.578 "zone_append": false, 00:08:31.578 "compare": 
false, 00:08:31.578 "compare_and_write": false, 00:08:31.578 "abort": false, 00:08:31.578 "seek_hole": false, 00:08:31.578 "seek_data": false, 00:08:31.578 "copy": false, 00:08:31.578 "nvme_iov_md": false 00:08:31.578 }, 00:08:31.578 "memory_domains": [ 00:08:31.578 { 00:08:31.578 "dma_device_id": "system", 00:08:31.578 "dma_device_type": 1 00:08:31.578 }, 00:08:31.578 { 00:08:31.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.578 "dma_device_type": 2 00:08:31.578 }, 00:08:31.578 { 00:08:31.578 "dma_device_id": "system", 00:08:31.578 "dma_device_type": 1 00:08:31.578 }, 00:08:31.578 { 00:08:31.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.578 "dma_device_type": 2 00:08:31.578 }, 00:08:31.578 { 00:08:31.578 "dma_device_id": "system", 00:08:31.578 "dma_device_type": 1 00:08:31.578 }, 00:08:31.578 { 00:08:31.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.578 "dma_device_type": 2 00:08:31.578 } 00:08:31.578 ], 00:08:31.578 "driver_specific": { 00:08:31.578 "raid": { 00:08:31.578 "uuid": "8c57dd18-5723-4bfb-a65d-ca775c72f95c", 00:08:31.578 "strip_size_kb": 64, 00:08:31.578 "state": "online", 00:08:31.578 "raid_level": "raid0", 00:08:31.578 "superblock": true, 00:08:31.578 "num_base_bdevs": 3, 00:08:31.578 "num_base_bdevs_discovered": 3, 00:08:31.578 "num_base_bdevs_operational": 3, 00:08:31.578 "base_bdevs_list": [ 00:08:31.578 { 00:08:31.578 "name": "pt1", 00:08:31.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.578 "is_configured": true, 00:08:31.578 "data_offset": 2048, 00:08:31.578 "data_size": 63488 00:08:31.578 }, 00:08:31.578 { 00:08:31.578 "name": "pt2", 00:08:31.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.578 "is_configured": true, 00:08:31.578 "data_offset": 2048, 00:08:31.578 "data_size": 63488 00:08:31.578 }, 00:08:31.578 { 00:08:31.578 "name": "pt3", 00:08:31.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.578 "is_configured": true, 00:08:31.578 "data_offset": 2048, 00:08:31.578 "data_size": 
63488 00:08:31.578 } 00:08:31.578 ] 00:08:31.578 } 00:08:31.578 } 00:08:31.578 }' 00:08:31.578 23:42:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.578 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:31.578 pt2 00:08:31.578 pt3' 00:08:31.578 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.578 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.578 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.578 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:31.578 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.578 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.579 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 [2024-12-06 23:42:43.218973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8c57dd18-5723-4bfb-a65d-ca775c72f95c 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8c57dd18-5723-4bfb-a65d-ca775c72f95c ']' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 [2024-12-06 23:42:43.262602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.839 [2024-12-06 23:42:43.262629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.839 [2024-12-06 23:42:43.262732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.839 [2024-12-06 23:42:43.262796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.839 [2024-12-06 23:42:43.262805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:31.839 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.840 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.840 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.840 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.100 [2024-12-06 23:42:43.398469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:32.100 [2024-12-06 23:42:43.400285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:32.100 [2024-12-06 23:42:43.400332] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:32.100 [2024-12-06 23:42:43.400384] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:32.100 [2024-12-06 23:42:43.400435] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:32.100 [2024-12-06 23:42:43.400454] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:32.100 [2024-12-06 23:42:43.400470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.100 [2024-12-06 23:42:43.400481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:32.100 request: 00:08:32.100 { 00:08:32.100 "name": "raid_bdev1", 00:08:32.100 "raid_level": "raid0", 00:08:32.100 "base_bdevs": [ 00:08:32.100 "malloc1", 00:08:32.100 "malloc2", 00:08:32.100 "malloc3" 00:08:32.100 ], 00:08:32.100 "strip_size_kb": 64, 00:08:32.100 "superblock": false, 00:08:32.100 "method": "bdev_raid_create", 00:08:32.100 "req_id": 1 00:08:32.100 } 00:08:32.100 Got JSON-RPC error response 00:08:32.100 response: 00:08:32.100 { 00:08:32.100 "code": -17, 00:08:32.100 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:32.100 } 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.100 [2024-12-06 23:42:43.462259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.100 [2024-12-06 23:42:43.462350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.100 [2024-12-06 23:42:43.462403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:32.100 [2024-12-06 23:42:43.462431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.100 [2024-12-06 23:42:43.464623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.100 [2024-12-06 23:42:43.464728] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.100 [2024-12-06 23:42:43.464836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:32.100 [2024-12-06 23:42:43.464905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:32.100 pt1 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.100 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.100 "name": "raid_bdev1", 00:08:32.100 "uuid": "8c57dd18-5723-4bfb-a65d-ca775c72f95c", 00:08:32.100 
"strip_size_kb": 64, 00:08:32.100 "state": "configuring", 00:08:32.100 "raid_level": "raid0", 00:08:32.100 "superblock": true, 00:08:32.100 "num_base_bdevs": 3, 00:08:32.100 "num_base_bdevs_discovered": 1, 00:08:32.100 "num_base_bdevs_operational": 3, 00:08:32.101 "base_bdevs_list": [ 00:08:32.101 { 00:08:32.101 "name": "pt1", 00:08:32.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.101 "is_configured": true, 00:08:32.101 "data_offset": 2048, 00:08:32.101 "data_size": 63488 00:08:32.101 }, 00:08:32.101 { 00:08:32.101 "name": null, 00:08:32.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.101 "is_configured": false, 00:08:32.101 "data_offset": 2048, 00:08:32.101 "data_size": 63488 00:08:32.101 }, 00:08:32.101 { 00:08:32.101 "name": null, 00:08:32.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.101 "is_configured": false, 00:08:32.101 "data_offset": 2048, 00:08:32.101 "data_size": 63488 00:08:32.101 } 00:08:32.101 ] 00:08:32.101 }' 00:08:32.101 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.101 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.360 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:32.360 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.360 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.360 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.619 [2024-12-06 23:42:43.921515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.619 [2024-12-06 23:42:43.921590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.619 [2024-12-06 23:42:43.921617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:32.619 [2024-12-06 23:42:43.921627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.619 [2024-12-06 23:42:43.922138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.619 [2024-12-06 23:42:43.922163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.619 [2024-12-06 23:42:43.922252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:32.619 [2024-12-06 23:42:43.922282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.619 pt2 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.619 [2024-12-06 23:42:43.929493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.619 23:42:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.619 "name": "raid_bdev1", 00:08:32.619 "uuid": "8c57dd18-5723-4bfb-a65d-ca775c72f95c", 00:08:32.619 "strip_size_kb": 64, 00:08:32.619 "state": "configuring", 00:08:32.619 "raid_level": "raid0", 00:08:32.619 "superblock": true, 00:08:32.619 "num_base_bdevs": 3, 00:08:32.619 "num_base_bdevs_discovered": 1, 00:08:32.619 "num_base_bdevs_operational": 3, 00:08:32.619 "base_bdevs_list": [ 00:08:32.619 { 00:08:32.619 "name": "pt1", 00:08:32.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.619 "is_configured": true, 00:08:32.619 "data_offset": 2048, 00:08:32.619 "data_size": 63488 00:08:32.619 }, 00:08:32.619 { 00:08:32.619 "name": null, 00:08:32.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.619 "is_configured": false, 00:08:32.619 "data_offset": 0, 00:08:32.619 "data_size": 63488 00:08:32.619 }, 00:08:32.619 { 00:08:32.619 "name": null, 00:08:32.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.619 
"is_configured": false, 00:08:32.619 "data_offset": 2048, 00:08:32.619 "data_size": 63488 00:08:32.619 } 00:08:32.619 ] 00:08:32.619 }' 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.619 23:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.878 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:32.878 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:32.878 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.878 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.878 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.878 [2024-12-06 23:42:44.376734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.878 [2024-12-06 23:42:44.376803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.878 [2024-12-06 23:42:44.376822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:32.878 [2024-12-06 23:42:44.376833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.878 [2024-12-06 23:42:44.377298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.878 [2024-12-06 23:42:44.377332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.879 [2024-12-06 23:42:44.377420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:32.879 [2024-12-06 23:42:44.377450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.879 pt2 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.879 [2024-12-06 23:42:44.384697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:32.879 [2024-12-06 23:42:44.384741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.879 [2024-12-06 23:42:44.384754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:32.879 [2024-12-06 23:42:44.384764] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.879 [2024-12-06 23:42:44.385112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.879 [2024-12-06 23:42:44.385160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:32.879 [2024-12-06 23:42:44.385218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:32.879 [2024-12-06 23:42:44.385238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:32.879 [2024-12-06 23:42:44.385360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.879 [2024-12-06 23:42:44.385375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:32.879 [2024-12-06 23:42:44.385619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:32.879 [2024-12-06 23:42:44.385799] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.879 [2024-12-06 23:42:44.385808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:32.879 [2024-12-06 23:42:44.385957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.879 pt3 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.879 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.138 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.138 "name": "raid_bdev1", 00:08:33.138 "uuid": "8c57dd18-5723-4bfb-a65d-ca775c72f95c", 00:08:33.138 "strip_size_kb": 64, 00:08:33.138 "state": "online", 00:08:33.138 "raid_level": "raid0", 00:08:33.138 "superblock": true, 00:08:33.138 "num_base_bdevs": 3, 00:08:33.138 "num_base_bdevs_discovered": 3, 00:08:33.138 "num_base_bdevs_operational": 3, 00:08:33.138 "base_bdevs_list": [ 00:08:33.138 { 00:08:33.138 "name": "pt1", 00:08:33.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.138 "is_configured": true, 00:08:33.138 "data_offset": 2048, 00:08:33.138 "data_size": 63488 00:08:33.138 }, 00:08:33.138 { 00:08:33.138 "name": "pt2", 00:08:33.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.138 "is_configured": true, 00:08:33.138 "data_offset": 2048, 00:08:33.138 "data_size": 63488 00:08:33.138 }, 00:08:33.138 { 00:08:33.138 "name": "pt3", 00:08:33.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.138 "is_configured": true, 00:08:33.138 "data_offset": 2048, 00:08:33.138 "data_size": 63488 00:08:33.138 } 00:08:33.138 ] 00:08:33.138 }' 00:08:33.138 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.138 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:33.398 23:42:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.398 [2024-12-06 23:42:44.824261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.398 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.398 "name": "raid_bdev1", 00:08:33.398 "aliases": [ 00:08:33.398 "8c57dd18-5723-4bfb-a65d-ca775c72f95c" 00:08:33.398 ], 00:08:33.398 "product_name": "Raid Volume", 00:08:33.398 "block_size": 512, 00:08:33.398 "num_blocks": 190464, 00:08:33.398 "uuid": "8c57dd18-5723-4bfb-a65d-ca775c72f95c", 00:08:33.398 "assigned_rate_limits": { 00:08:33.398 "rw_ios_per_sec": 0, 00:08:33.398 "rw_mbytes_per_sec": 0, 00:08:33.398 "r_mbytes_per_sec": 0, 00:08:33.398 "w_mbytes_per_sec": 0 00:08:33.398 }, 00:08:33.398 "claimed": false, 00:08:33.398 "zoned": false, 00:08:33.398 "supported_io_types": { 00:08:33.398 "read": true, 00:08:33.398 "write": true, 00:08:33.398 "unmap": true, 00:08:33.398 "flush": true, 00:08:33.398 "reset": true, 00:08:33.398 "nvme_admin": false, 00:08:33.398 "nvme_io": false, 00:08:33.398 "nvme_io_md": false, 00:08:33.398 
"write_zeroes": true, 00:08:33.398 "zcopy": false, 00:08:33.398 "get_zone_info": false, 00:08:33.398 "zone_management": false, 00:08:33.398 "zone_append": false, 00:08:33.398 "compare": false, 00:08:33.398 "compare_and_write": false, 00:08:33.398 "abort": false, 00:08:33.398 "seek_hole": false, 00:08:33.398 "seek_data": false, 00:08:33.398 "copy": false, 00:08:33.398 "nvme_iov_md": false 00:08:33.398 }, 00:08:33.398 "memory_domains": [ 00:08:33.398 { 00:08:33.399 "dma_device_id": "system", 00:08:33.399 "dma_device_type": 1 00:08:33.399 }, 00:08:33.399 { 00:08:33.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.399 "dma_device_type": 2 00:08:33.399 }, 00:08:33.399 { 00:08:33.399 "dma_device_id": "system", 00:08:33.399 "dma_device_type": 1 00:08:33.399 }, 00:08:33.399 { 00:08:33.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.399 "dma_device_type": 2 00:08:33.399 }, 00:08:33.399 { 00:08:33.399 "dma_device_id": "system", 00:08:33.399 "dma_device_type": 1 00:08:33.399 }, 00:08:33.399 { 00:08:33.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.399 "dma_device_type": 2 00:08:33.399 } 00:08:33.399 ], 00:08:33.399 "driver_specific": { 00:08:33.399 "raid": { 00:08:33.399 "uuid": "8c57dd18-5723-4bfb-a65d-ca775c72f95c", 00:08:33.399 "strip_size_kb": 64, 00:08:33.399 "state": "online", 00:08:33.399 "raid_level": "raid0", 00:08:33.399 "superblock": true, 00:08:33.399 "num_base_bdevs": 3, 00:08:33.399 "num_base_bdevs_discovered": 3, 00:08:33.399 "num_base_bdevs_operational": 3, 00:08:33.399 "base_bdevs_list": [ 00:08:33.399 { 00:08:33.399 "name": "pt1", 00:08:33.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.399 "is_configured": true, 00:08:33.399 "data_offset": 2048, 00:08:33.399 "data_size": 63488 00:08:33.399 }, 00:08:33.399 { 00:08:33.399 "name": "pt2", 00:08:33.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.399 "is_configured": true, 00:08:33.399 "data_offset": 2048, 00:08:33.399 "data_size": 63488 00:08:33.399 }, 00:08:33.399 
{ 00:08:33.399 "name": "pt3", 00:08:33.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.399 "is_configured": true, 00:08:33.399 "data_offset": 2048, 00:08:33.399 "data_size": 63488 00:08:33.399 } 00:08:33.399 ] 00:08:33.399 } 00:08:33.399 } 00:08:33.399 }' 00:08:33.399 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.399 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:33.399 pt2 00:08:33.399 pt3' 00:08:33.399 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.659 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.659 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.659 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.659 23:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.659 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.659 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.659 23:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.659 23:42:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:33.659 
[2024-12-06 23:42:45.099795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8c57dd18-5723-4bfb-a65d-ca775c72f95c '!=' 8c57dd18-5723-4bfb-a65d-ca775c72f95c ']' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64978 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64978 ']' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64978 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64978 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64978' 00:08:33.659 killing process with pid 64978 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64978 00:08:33.659 [2024-12-06 23:42:45.187672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.659 [2024-12-06 23:42:45.187850] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.659 23:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64978 00:08:33.660 [2024-12-06 23:42:45.187964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.660 [2024-12-06 23:42:45.187979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:33.921 [2024-12-06 23:42:45.479503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.312 23:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:35.312 00:08:35.312 real 0m5.231s 00:08:35.312 user 0m7.588s 00:08:35.312 sys 0m0.864s 00:08:35.312 ************************************ 00:08:35.312 END TEST raid_superblock_test 00:08:35.312 ************************************ 00:08:35.312 23:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.312 23:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 23:42:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:35.312 23:42:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.312 23:42:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.312 23:42:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 ************************************ 00:08:35.312 START TEST raid_read_error_test 00:08:35.312 ************************************ 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:35.312 23:42:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9rYcreCZvJ 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65237 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65237 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65237 ']' 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.312 23:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 [2024-12-06 23:42:46.717268] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:08:35.312 [2024-12-06 23:42:46.717384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65237 ] 00:08:35.572 [2024-12-06 23:42:46.891219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.572 [2024-12-06 23:42:46.995252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.832 [2024-12-06 23:42:47.180769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.832 [2024-12-06 23:42:47.180837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.092 BaseBdev1_malloc 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.092 true 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.092 [2024-12-06 23:42:47.597585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.092 [2024-12-06 23:42:47.597644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.092 [2024-12-06 23:42:47.597677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.092 [2024-12-06 23:42:47.597691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.092 [2024-12-06 23:42:47.600025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.092 [2024-12-06 23:42:47.600064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.092 BaseBdev1 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.092 BaseBdev2_malloc 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.092 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.352 true 00:08:36.352 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.352 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.352 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.352 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.352 [2024-12-06 23:42:47.665913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.352 [2024-12-06 23:42:47.665964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.352 [2024-12-06 23:42:47.665980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.352 [2024-12-06 23:42:47.665991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.352 [2024-12-06 23:42:47.668048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.352 [2024-12-06 23:42:47.668085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.352 BaseBdev2 00:08:36.352 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.353 BaseBdev3_malloc 00:08:36.353 23:42:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.353 true 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.353 [2024-12-06 23:42:47.741470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:36.353 [2024-12-06 23:42:47.741525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.353 [2024-12-06 23:42:47.741544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:36.353 [2024-12-06 23:42:47.741554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.353 [2024-12-06 23:42:47.743690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.353 [2024-12-06 23:42:47.743725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:36.353 BaseBdev3 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.353 [2024-12-06 23:42:47.753517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.353 [2024-12-06 23:42:47.755342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.353 [2024-12-06 23:42:47.755417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.353 [2024-12-06 23:42:47.755610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:36.353 [2024-12-06 23:42:47.755631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:36.353 [2024-12-06 23:42:47.755912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:36.353 [2024-12-06 23:42:47.756077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:36.353 [2024-12-06 23:42:47.756098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:36.353 [2024-12-06 23:42:47.756263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.353 23:42:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.353 "name": "raid_bdev1", 00:08:36.353 "uuid": "72eb7bc2-8437-4262-accb-d71bfd682aa3", 00:08:36.353 "strip_size_kb": 64, 00:08:36.353 "state": "online", 00:08:36.353 "raid_level": "raid0", 00:08:36.353 "superblock": true, 00:08:36.353 "num_base_bdevs": 3, 00:08:36.353 "num_base_bdevs_discovered": 3, 00:08:36.353 "num_base_bdevs_operational": 3, 00:08:36.353 "base_bdevs_list": [ 00:08:36.353 { 00:08:36.353 "name": "BaseBdev1", 00:08:36.353 "uuid": "94f2696d-abd1-588f-b49c-e9e28554f159", 00:08:36.353 "is_configured": true, 00:08:36.353 "data_offset": 2048, 00:08:36.353 "data_size": 63488 00:08:36.353 }, 00:08:36.353 { 00:08:36.353 "name": "BaseBdev2", 00:08:36.353 "uuid": "bbb9ebe0-fc56-514d-ae81-36df3ce9a5dc", 00:08:36.353 "is_configured": true, 00:08:36.353 "data_offset": 2048, 00:08:36.353 "data_size": 63488 
00:08:36.353 }, 00:08:36.353 { 00:08:36.353 "name": "BaseBdev3", 00:08:36.353 "uuid": "6d98460f-2249-596b-917e-d840773cc7d3", 00:08:36.353 "is_configured": true, 00:08:36.353 "data_offset": 2048, 00:08:36.353 "data_size": 63488 00:08:36.353 } 00:08:36.353 ] 00:08:36.353 }' 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.353 23:42:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.923 23:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:36.923 23:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.923 [2024-12-06 23:42:48.281842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.863 "name": "raid_bdev1", 00:08:37.863 "uuid": "72eb7bc2-8437-4262-accb-d71bfd682aa3", 00:08:37.863 "strip_size_kb": 64, 00:08:37.863 "state": "online", 00:08:37.863 "raid_level": "raid0", 00:08:37.863 "superblock": true, 00:08:37.863 "num_base_bdevs": 3, 00:08:37.863 "num_base_bdevs_discovered": 3, 00:08:37.863 "num_base_bdevs_operational": 3, 00:08:37.863 "base_bdevs_list": [ 00:08:37.863 { 00:08:37.863 "name": "BaseBdev1", 00:08:37.863 "uuid": "94f2696d-abd1-588f-b49c-e9e28554f159", 00:08:37.863 "is_configured": true, 00:08:37.863 "data_offset": 2048, 00:08:37.863 "data_size": 63488 
00:08:37.863 }, 00:08:37.863 { 00:08:37.863 "name": "BaseBdev2", 00:08:37.863 "uuid": "bbb9ebe0-fc56-514d-ae81-36df3ce9a5dc", 00:08:37.863 "is_configured": true, 00:08:37.863 "data_offset": 2048, 00:08:37.863 "data_size": 63488 00:08:37.863 }, 00:08:37.863 { 00:08:37.863 "name": "BaseBdev3", 00:08:37.863 "uuid": "6d98460f-2249-596b-917e-d840773cc7d3", 00:08:37.863 "is_configured": true, 00:08:37.863 "data_offset": 2048, 00:08:37.863 "data_size": 63488 00:08:37.863 } 00:08:37.863 ] 00:08:37.863 }' 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.863 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.123 [2024-12-06 23:42:49.652186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.123 [2024-12-06 23:42:49.652218] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.123 [2024-12-06 23:42:49.654966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.123 [2024-12-06 23:42:49.655014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.123 [2024-12-06 23:42:49.655050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.123 [2024-12-06 23:42:49.655059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:38.123 { 00:08:38.123 "results": [ 00:08:38.123 { 00:08:38.123 "job": "raid_bdev1", 00:08:38.123 "core_mask": "0x1", 00:08:38.123 "workload": "randrw", 00:08:38.123 "percentage": 50, 
00:08:38.123 "status": "finished", 00:08:38.123 "queue_depth": 1, 00:08:38.123 "io_size": 131072, 00:08:38.123 "runtime": 1.371261, 00:08:38.123 "iops": 15862.771565734021, 00:08:38.123 "mibps": 1982.8464457167527, 00:08:38.123 "io_failed": 1, 00:08:38.123 "io_timeout": 0, 00:08:38.123 "avg_latency_us": 87.48889703914753, 00:08:38.123 "min_latency_us": 25.152838427947597, 00:08:38.123 "max_latency_us": 1459.5353711790392 00:08:38.123 } 00:08:38.123 ], 00:08:38.123 "core_count": 1 00:08:38.123 } 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65237 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65237 ']' 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65237 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.123 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65237 00:08:38.383 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.383 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.383 killing process with pid 65237 00:08:38.383 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65237' 00:08:38.383 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65237 00:08:38.383 [2024-12-06 23:42:49.698478] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.383 23:42:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65237 00:08:38.643 [2024-12-06 
23:42:49.971819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.024 23:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:40.024 23:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9rYcreCZvJ 00:08:40.024 23:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:40.024 23:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:40.024 23:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:40.025 23:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.025 23:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.025 23:42:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:40.025 00:08:40.025 real 0m4.661s 00:08:40.025 user 0m5.479s 00:08:40.025 sys 0m0.570s 00:08:40.025 23:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.025 23:42:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.025 ************************************ 00:08:40.025 END TEST raid_read_error_test 00:08:40.025 ************************************ 00:08:40.025 23:42:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:40.025 23:42:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:40.025 23:42:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.025 23:42:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.025 ************************************ 00:08:40.025 START TEST raid_write_error_test 00:08:40.025 ************************************ 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:40.025 23:42:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:40.025 23:42:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YKQQWusBj9 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65377 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65377 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65377 ']' 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:40.025 23:42:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.025 [2024-12-06 23:42:51.460766] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization...
00:08:40.025 [2024-12-06 23:42:51.460888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65377 ]
00:08:40.285 [2024-12-06 23:42:51.640574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:40.285 [2024-12-06 23:42:51.783931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:40.544 [2024-12-06 23:42:52.030595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:40.545 [2024-12-06 23:42:52.030677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.804 BaseBdev1_malloc
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.804 true
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.804 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.804 [2024-12-06 23:42:52.327986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:40.804 [2024-12-06 23:42:52.328053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:40.804 [2024-12-06 23:42:52.328077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:40.804 [2024-12-06 23:42:52.328089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:40.804 [2024-12-06 23:42:52.330534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:40.804 [2024-12-06 23:42:52.330573] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 BaseBdev1
00:08:40.805 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.805 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:40.805 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:40.805 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.805 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.065 BaseBdev2_malloc
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.065 true
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.065 [2024-12-06 23:42:52.401502] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:41.065 [2024-12-06 23:42:52.401561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:41.065 [2024-12-06 23:42:52.401579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:41.065 [2024-12-06 23:42:52.401591] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:41.065 [2024-12-06 23:42:52.403983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:41.065 [2024-12-06 23:42:52.404018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 BaseBdev2
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.065 BaseBdev3_malloc
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.065 true
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.065 [2024-12-06 23:42:52.486623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:08:41.065 [2024-12-06 23:42:52.486701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:41.065 [2024-12-06 23:42:52.486720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:08:41.065 [2024-12-06 23:42:52.486747] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:41.065 [2024-12-06 23:42:52.489210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:41.065 [2024-12-06 23:42:52.489246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:08:41.065 BaseBdev3
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.065 [2024-12-06 23:42:52.498714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:41.065 [2024-12-06 23:42:52.500938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:41.065 [2024-12-06 23:42:52.501019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:41.065 [2024-12-06 23:42:52.501233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:08:41.065 [2024-12-06 23:42:52.501255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:41.065 [2024-12-06 23:42:52.501553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:08:41.065 [2024-12-06 23:42:52.501755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:08:41.065 [2024-12-06 23:42:52.501778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:08:41.065 [2024-12-06 23:42:52.501964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:41.065 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:41.066 "name": "raid_bdev1",
00:08:41.066 "uuid": "f07d9174-57d1-4b44-acaf-07c75f41329c",
00:08:41.066 "strip_size_kb": 64,
00:08:41.066 "state": "online",
00:08:41.066 "raid_level": "raid0",
00:08:41.066 "superblock": true,
00:08:41.066 "num_base_bdevs": 3,
00:08:41.066 "num_base_bdevs_discovered": 3,
00:08:41.066 "num_base_bdevs_operational": 3,
00:08:41.066 "base_bdevs_list": [
00:08:41.066 {
00:08:41.066 "name": "BaseBdev1",
00:08:41.066 "uuid": "4bc33f16-c1d3-5ca1-81a2-60ef35eeff53",
00:08:41.066 "is_configured": true,
00:08:41.066 "data_offset": 2048,
00:08:41.066 "data_size": 63488
00:08:41.066 },
00:08:41.066 {
00:08:41.066 "name": "BaseBdev2",
00:08:41.066 "uuid": "35466288-a464-52bd-a266-4dc45bbfaff5",
00:08:41.066 "is_configured": true,
00:08:41.066 "data_offset": 2048,
00:08:41.066 "data_size": 63488
00:08:41.066 },
00:08:41.066 {
00:08:41.066 "name": "BaseBdev3",
00:08:41.066 "uuid": "acb55756-a7ff-51f0-ac0c-9e91109a1756",
00:08:41.066 "is_configured": true,
00:08:41.066 "data_offset": 2048,
00:08:41.066 "data_size": 63488
00:08:41.066 }
00:08:41.066 ]
00:08:41.066 }'
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:41.066 23:42:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.635 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:41.635 23:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:41.635 [2024-12-06 23:42:53.043173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.588 23:42:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.588 23:42:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:42.588 "name": "raid_bdev1",
00:08:42.588 "uuid": "f07d9174-57d1-4b44-acaf-07c75f41329c",
00:08:42.588 "strip_size_kb": 64,
00:08:42.588 "state": "online", "raid_level": "raid0",
00:08:42.588 "superblock": true,
00:08:42.588 "num_base_bdevs": 3,
00:08:42.588 "num_base_bdevs_discovered": 3,
00:08:42.588 "num_base_bdevs_operational": 3,
00:08:42.588 "base_bdevs_list": [
00:08:42.588 {
00:08:42.588 "name": "BaseBdev1",
00:08:42.588 "uuid": "4bc33f16-c1d3-5ca1-81a2-60ef35eeff53",
00:08:42.588 "is_configured": true,
00:08:42.588 "data_offset": 2048,
00:08:42.588 "data_size": 63488
00:08:42.588 },
00:08:42.588 {
00:08:42.588 "name": "BaseBdev2",
00:08:42.588 "uuid": "35466288-a464-52bd-a266-4dc45bbfaff5",
00:08:42.588 "is_configured": true,
00:08:42.588 "data_offset": 2048,
00:08:42.588 "data_size": 63488
00:08:42.588 },
00:08:42.588 {
00:08:42.588 "name": "BaseBdev3",
00:08:42.588 "uuid": "acb55756-a7ff-51f0-ac0c-9e91109a1756",
00:08:42.588 "is_configured": true,
00:08:42.588 "data_offset": 2048,
00:08:42.589 "data_size": 63488
00:08:42.589 }
00:08:42.589 ]
00:08:42.589 }'
00:08:42.589 23:42:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:42.589 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.157 23:42:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:43.157 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.158 [2024-12-06 23:42:54.428016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:43.158 [2024-12-06 23:42:54.428068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:43.158 [2024-12-06 23:42:54.430948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:43.158 [2024-12-06 23:42:54.431002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:43.158 [2024-12-06 23:42:54.431046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:43.158 [2024-12-06 23:42:54.431057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:08:43.158 {
00:08:43.158 "results": [
00:08:43.158 {
00:08:43.158 "job": "raid_bdev1",
00:08:43.158 "core_mask": "0x1",
00:08:43.158 "workload": "randrw",
00:08:43.158 "percentage": 50,
00:08:43.158 "status": "finished",
00:08:43.158 "queue_depth": 1,
00:08:43.158 "io_size": 131072,
00:08:43.158 "runtime": 1.385581,
00:08:43.158 "iops": 13353.964871054091,
00:08:43.158 "mibps": 1669.2456088817614,
00:08:43.158 "io_failed": 1,
00:08:43.158 "io_timeout": 0,
00:08:43.158 "avg_latency_us": 105.24489453006267,
00:08:43.158 "min_latency_us": 26.382532751091702,
00:08:43.158 "max_latency_us": 1423.7624454148472
00:08:43.158 }
00:08:43.158 ],
00:08:43.158 "core_count": 1
00:08:43.158 }
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65377
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65377 ']'
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65377
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65377
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65377'
killing process with pid 65377
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65377
00:08:43.158 [2024-12-06 23:42:54.470404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:43.158 23:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65377
00:08:43.417 [2024-12-06 23:42:54.730873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YKQQWusBj9
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:08:44.795
00:08:44.795 real 0m4.714s
00:08:44.795 user 0m5.471s
00:08:44.795 sys 0m0.643s
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:44.795 23:42:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.795 ************************************
00:08:44.795 END TEST raid_write_error_test
00:08:44.795 ************************************
00:08:44.795 23:42:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:44.795 23:42:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:08:44.795 23:42:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:44.795 23:42:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:44.795 23:42:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:44.795 ************************************
00:08:44.795 START TEST raid_state_function_test
00:08:44.795 ************************************
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65521
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65521'
Process raid pid: 65521
00:08:44.795 23:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65521
00:08:44.796 23:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65521 ']'
00:08:44.796 23:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:44.796 23:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:44.796 23:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:44.796 23:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:44.796 23:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.796 [2024-12-06 23:42:56.230460] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization...
00:08:44.796 [2024-12-06 23:42:56.230565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:45.055 [2024-12-06 23:42:56.404366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.055 [2024-12-06 23:42:56.546299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:45.315 [2024-12-06 23:42:56.784485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:45.315 [2024-12-06 23:42:56.784538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:45.573 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:45.573 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:08:45.573 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:45.573 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.573 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.573 [2024-12-06 23:42:57.055306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:45.573 [2024-12-06 23:42:57.055375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:45.573 [2024-12-06 23:42:57.055386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:45.573 [2024-12-06 23:42:57.055397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:45.573 [2024-12-06 23:42:57.055403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:45.573 [2024-12-06 23:42:57.055413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:45.574 "name": "Existed_Raid",
00:08:45.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.574 "strip_size_kb": 64,
00:08:45.574 "state": "configuring",
00:08:45.574 "raid_level": "concat",
00:08:45.574 "superblock": false,
00:08:45.574 "num_base_bdevs": 3,
00:08:45.574 "num_base_bdevs_discovered": 0,
00:08:45.574 "num_base_bdevs_operational": 3,
00:08:45.574 "base_bdevs_list": [
00:08:45.574 {
00:08:45.574 "name": "BaseBdev1",
00:08:45.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.574 "is_configured": false,
00:08:45.574 "data_offset": 0,
00:08:45.574 "data_size": 0
00:08:45.574 },
00:08:45.574 {
00:08:45.574 "name": "BaseBdev2",
00:08:45.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.574 "is_configured": false,
00:08:45.574 "data_offset": 0,
00:08:45.574 "data_size": 0
00:08:45.574 },
00:08:45.574 {
00:08:45.574 "name": "BaseBdev3",
00:08:45.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.574 "is_configured": false,
00:08:45.574 "data_offset": 0,
00:08:45.574 "data_size": 0
00:08:45.574 }
00:08:45.574 ]
00:08:45.574 }'
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:45.574 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.141 [2024-12-06 23:42:57.454684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:46.141 [2024-12-06 23:42:57.454743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.141 [2024-12-06 23:42:57.466625] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:46.141 [2024-12-06 23:42:57.466684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:46.141 [2024-12-06 23:42:57.466694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:46.141 [2024-12-06 23:42:57.466704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:46.141 [2024-12-06 23:42:57.466717] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:46.141 [2024-12-06 23:42:57.466727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.141 [2024-12-06 23:42:57.521705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:46.141 BaseBdev1
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.141 [
00:08:46.141 {
00:08:46.141 "name": "BaseBdev1",
00:08:46.141 "aliases": [
00:08:46.141 "cc63b054-6452-4c9d-bf3d-d7bebe6c04b5"
00:08:46.141 ],
00:08:46.141 "product_name": "Malloc disk",
00:08:46.141 "block_size": 512,
00:08:46.141 "num_blocks": 65536,
00:08:46.141 "uuid": "cc63b054-6452-4c9d-bf3d-d7bebe6c04b5",
00:08:46.141 "assigned_rate_limits": {
00:08:46.141 "rw_ios_per_sec": 0,
00:08:46.141 "rw_mbytes_per_sec": 0,
00:08:46.141 "r_mbytes_per_sec": 0,
00:08:46.141 "w_mbytes_per_sec": 0
00:08:46.141 },
00:08:46.141 "claimed": true,
00:08:46.141 "claim_type": "exclusive_write",
00:08:46.141 "zoned": false,
00:08:46.141 "supported_io_types": {
00:08:46.141 "read": true,
00:08:46.141 "write": true,
00:08:46.141 "unmap": true,
00:08:46.141 "flush": true,
00:08:46.141 "reset": true,
00:08:46.141 "nvme_admin": false,
00:08:46.141 "nvme_io": false,
00:08:46.141 "nvme_io_md": false,
00:08:46.141 "write_zeroes": true,
00:08:46.141 "zcopy": true,
00:08:46.141 "get_zone_info": false,
00:08:46.141 "zone_management": false,
00:08:46.141 "zone_append": false,
00:08:46.141 "compare": false,
00:08:46.141 "compare_and_write": false,
00:08:46.141 "abort": true,
00:08:46.141 "seek_hole": false,
00:08:46.141 "seek_data": false,
00:08:46.141 "copy": true,
00:08:46.141 "nvme_iov_md": false
00:08:46.141 },
00:08:46.141 "memory_domains": [
00:08:46.141 {
00:08:46.141 "dma_device_id": "system",
00:08:46.141 "dma_device_type": 1
00:08:46.141 },
00:08:46.141 {
00:08:46.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:46.141 "dma_device_type": 2
00:08:46.141 }
00:08:46.141 ],
00:08:46.141 "driver_specific": {}
00:08:46.141 }
00:08:46.141 ]
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.141 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.709 23:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:46.709 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.709 23:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.710 [2024-12-06 23:42:57.996945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:46.710 [2024-12-06 23:42:57.997022] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.710 [2024-12-06 23:42:58.004944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.710 [2024-12-06 23:42:58.007166] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.710 [2024-12-06 23:42:58.007216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.710 [2024-12-06 23:42:58.007227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.710 [2024-12-06 23:42:58.007237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.710 23:42:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.710 "name": "Existed_Raid", 00:08:46.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.710 "strip_size_kb": 64, 00:08:46.710 "state": "configuring", 00:08:46.710 "raid_level": "concat", 00:08:46.710 "superblock": false, 00:08:46.710 "num_base_bdevs": 3, 00:08:46.710 "num_base_bdevs_discovered": 1, 00:08:46.710 "num_base_bdevs_operational": 3, 00:08:46.710 "base_bdevs_list": [ 00:08:46.710 { 00:08:46.710 "name": "BaseBdev1", 00:08:46.710 "uuid": "cc63b054-6452-4c9d-bf3d-d7bebe6c04b5", 00:08:46.710 "is_configured": true, 00:08:46.710 "data_offset": 
0, 00:08:46.710 "data_size": 65536 00:08:46.710 }, 00:08:46.710 { 00:08:46.710 "name": "BaseBdev2", 00:08:46.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.710 "is_configured": false, 00:08:46.710 "data_offset": 0, 00:08:46.710 "data_size": 0 00:08:46.710 }, 00:08:46.710 { 00:08:46.710 "name": "BaseBdev3", 00:08:46.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.710 "is_configured": false, 00:08:46.710 "data_offset": 0, 00:08:46.710 "data_size": 0 00:08:46.710 } 00:08:46.710 ] 00:08:46.710 }' 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.710 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 [2024-12-06 23:42:58.483274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.969 BaseBdev2 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 [ 00:08:46.969 { 00:08:46.969 "name": "BaseBdev2", 00:08:46.969 "aliases": [ 00:08:46.969 "f2917ecb-c83a-48f7-a9a4-59a8be5d4013" 00:08:46.969 ], 00:08:46.969 "product_name": "Malloc disk", 00:08:46.969 "block_size": 512, 00:08:46.969 "num_blocks": 65536, 00:08:46.969 "uuid": "f2917ecb-c83a-48f7-a9a4-59a8be5d4013", 00:08:46.969 "assigned_rate_limits": { 00:08:46.969 "rw_ios_per_sec": 0, 00:08:46.969 "rw_mbytes_per_sec": 0, 00:08:46.969 "r_mbytes_per_sec": 0, 00:08:46.969 "w_mbytes_per_sec": 0 00:08:46.969 }, 00:08:46.969 "claimed": true, 00:08:46.969 "claim_type": "exclusive_write", 00:08:46.969 "zoned": false, 00:08:46.969 "supported_io_types": { 00:08:46.969 "read": true, 00:08:46.969 "write": true, 00:08:46.969 "unmap": true, 00:08:46.969 "flush": true, 00:08:46.969 "reset": true, 00:08:46.969 "nvme_admin": false, 00:08:46.969 "nvme_io": false, 00:08:46.969 "nvme_io_md": false, 00:08:46.969 "write_zeroes": true, 00:08:46.969 "zcopy": true, 00:08:46.969 "get_zone_info": false, 00:08:46.969 "zone_management": false, 00:08:46.969 "zone_append": false, 00:08:46.969 "compare": false, 00:08:46.969 "compare_and_write": false, 00:08:46.969 "abort": true, 00:08:46.969 "seek_hole": 
false, 00:08:46.969 "seek_data": false, 00:08:46.969 "copy": true, 00:08:46.969 "nvme_iov_md": false 00:08:46.969 }, 00:08:46.969 "memory_domains": [ 00:08:46.969 { 00:08:46.969 "dma_device_id": "system", 00:08:46.969 "dma_device_type": 1 00:08:46.969 }, 00:08:46.969 { 00:08:46.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.969 "dma_device_type": 2 00:08:46.969 } 00:08:46.969 ], 00:08:46.969 "driver_specific": {} 00:08:46.969 } 00:08:46.969 ] 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.969 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.229 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.229 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.229 "name": "Existed_Raid", 00:08:47.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.229 "strip_size_kb": 64, 00:08:47.229 "state": "configuring", 00:08:47.229 "raid_level": "concat", 00:08:47.229 "superblock": false, 00:08:47.229 "num_base_bdevs": 3, 00:08:47.229 "num_base_bdevs_discovered": 2, 00:08:47.229 "num_base_bdevs_operational": 3, 00:08:47.229 "base_bdevs_list": [ 00:08:47.229 { 00:08:47.229 "name": "BaseBdev1", 00:08:47.229 "uuid": "cc63b054-6452-4c9d-bf3d-d7bebe6c04b5", 00:08:47.229 "is_configured": true, 00:08:47.229 "data_offset": 0, 00:08:47.229 "data_size": 65536 00:08:47.229 }, 00:08:47.229 { 00:08:47.229 "name": "BaseBdev2", 00:08:47.229 "uuid": "f2917ecb-c83a-48f7-a9a4-59a8be5d4013", 00:08:47.229 "is_configured": true, 00:08:47.229 "data_offset": 0, 00:08:47.229 "data_size": 65536 00:08:47.229 }, 00:08:47.229 { 00:08:47.229 "name": "BaseBdev3", 00:08:47.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.229 "is_configured": false, 00:08:47.229 "data_offset": 0, 00:08:47.229 "data_size": 0 00:08:47.229 } 00:08:47.229 ] 00:08:47.229 }' 00:08:47.229 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.229 23:42:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.489 [2024-12-06 23:42:58.950432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.489 [2024-12-06 23:42:58.950497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.489 [2024-12-06 23:42:58.950512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:47.489 [2024-12-06 23:42:58.950868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:47.489 [2024-12-06 23:42:58.951083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.489 [2024-12-06 23:42:58.951102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:47.489 [2024-12-06 23:42:58.951421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.489 BaseBdev3 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.489 23:42:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.489 [ 00:08:47.489 { 00:08:47.489 "name": "BaseBdev3", 00:08:47.489 "aliases": [ 00:08:47.489 "9e4fb321-fa07-449d-89f8-efe696d4287f" 00:08:47.489 ], 00:08:47.489 "product_name": "Malloc disk", 00:08:47.489 "block_size": 512, 00:08:47.489 "num_blocks": 65536, 00:08:47.489 "uuid": "9e4fb321-fa07-449d-89f8-efe696d4287f", 00:08:47.489 "assigned_rate_limits": { 00:08:47.489 "rw_ios_per_sec": 0, 00:08:47.489 "rw_mbytes_per_sec": 0, 00:08:47.489 "r_mbytes_per_sec": 0, 00:08:47.489 "w_mbytes_per_sec": 0 00:08:47.489 }, 00:08:47.489 "claimed": true, 00:08:47.489 "claim_type": "exclusive_write", 00:08:47.489 "zoned": false, 00:08:47.489 "supported_io_types": { 00:08:47.489 "read": true, 00:08:47.489 "write": true, 00:08:47.489 "unmap": true, 00:08:47.489 "flush": true, 00:08:47.489 "reset": true, 00:08:47.489 "nvme_admin": false, 00:08:47.489 "nvme_io": false, 00:08:47.489 "nvme_io_md": false, 00:08:47.489 "write_zeroes": true, 00:08:47.489 "zcopy": true, 00:08:47.489 "get_zone_info": false, 00:08:47.489 "zone_management": false, 00:08:47.489 "zone_append": false, 00:08:47.489 "compare": false, 
00:08:47.489 "compare_and_write": false, 00:08:47.489 "abort": true, 00:08:47.489 "seek_hole": false, 00:08:47.489 "seek_data": false, 00:08:47.489 "copy": true, 00:08:47.489 "nvme_iov_md": false 00:08:47.489 }, 00:08:47.489 "memory_domains": [ 00:08:47.489 { 00:08:47.489 "dma_device_id": "system", 00:08:47.489 "dma_device_type": 1 00:08:47.489 }, 00:08:47.489 { 00:08:47.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.489 "dma_device_type": 2 00:08:47.489 } 00:08:47.489 ], 00:08:47.489 "driver_specific": {} 00:08:47.489 } 00:08:47.489 ] 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.489 23:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.489 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.489 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.489 "name": "Existed_Raid", 00:08:47.489 "uuid": "1f553cf6-70be-468b-8088-1fd5a967ea86", 00:08:47.489 "strip_size_kb": 64, 00:08:47.489 "state": "online", 00:08:47.489 "raid_level": "concat", 00:08:47.489 "superblock": false, 00:08:47.489 "num_base_bdevs": 3, 00:08:47.489 "num_base_bdevs_discovered": 3, 00:08:47.489 "num_base_bdevs_operational": 3, 00:08:47.489 "base_bdevs_list": [ 00:08:47.489 { 00:08:47.489 "name": "BaseBdev1", 00:08:47.489 "uuid": "cc63b054-6452-4c9d-bf3d-d7bebe6c04b5", 00:08:47.489 "is_configured": true, 00:08:47.489 "data_offset": 0, 00:08:47.489 "data_size": 65536 00:08:47.489 }, 00:08:47.489 { 00:08:47.489 "name": "BaseBdev2", 00:08:47.489 "uuid": "f2917ecb-c83a-48f7-a9a4-59a8be5d4013", 00:08:47.489 "is_configured": true, 00:08:47.489 "data_offset": 0, 00:08:47.489 "data_size": 65536 00:08:47.489 }, 00:08:47.489 { 00:08:47.489 "name": "BaseBdev3", 00:08:47.489 "uuid": "9e4fb321-fa07-449d-89f8-efe696d4287f", 00:08:47.489 "is_configured": true, 00:08:47.489 "data_offset": 0, 00:08:47.489 "data_size": 65536 00:08:47.489 } 00:08:47.489 ] 00:08:47.489 }' 00:08:47.489 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:47.489 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.059 [2024-12-06 23:42:59.386117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.059 "name": "Existed_Raid", 00:08:48.059 "aliases": [ 00:08:48.059 "1f553cf6-70be-468b-8088-1fd5a967ea86" 00:08:48.059 ], 00:08:48.059 "product_name": "Raid Volume", 00:08:48.059 "block_size": 512, 00:08:48.059 "num_blocks": 196608, 00:08:48.059 "uuid": "1f553cf6-70be-468b-8088-1fd5a967ea86", 00:08:48.059 "assigned_rate_limits": { 00:08:48.059 "rw_ios_per_sec": 0, 00:08:48.059 "rw_mbytes_per_sec": 0, 00:08:48.059 "r_mbytes_per_sec": 
0, 00:08:48.059 "w_mbytes_per_sec": 0 00:08:48.059 }, 00:08:48.059 "claimed": false, 00:08:48.059 "zoned": false, 00:08:48.059 "supported_io_types": { 00:08:48.059 "read": true, 00:08:48.059 "write": true, 00:08:48.059 "unmap": true, 00:08:48.059 "flush": true, 00:08:48.059 "reset": true, 00:08:48.059 "nvme_admin": false, 00:08:48.059 "nvme_io": false, 00:08:48.059 "nvme_io_md": false, 00:08:48.059 "write_zeroes": true, 00:08:48.059 "zcopy": false, 00:08:48.059 "get_zone_info": false, 00:08:48.059 "zone_management": false, 00:08:48.059 "zone_append": false, 00:08:48.059 "compare": false, 00:08:48.059 "compare_and_write": false, 00:08:48.059 "abort": false, 00:08:48.059 "seek_hole": false, 00:08:48.059 "seek_data": false, 00:08:48.059 "copy": false, 00:08:48.059 "nvme_iov_md": false 00:08:48.059 }, 00:08:48.059 "memory_domains": [ 00:08:48.059 { 00:08:48.059 "dma_device_id": "system", 00:08:48.059 "dma_device_type": 1 00:08:48.059 }, 00:08:48.059 { 00:08:48.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.059 "dma_device_type": 2 00:08:48.059 }, 00:08:48.059 { 00:08:48.059 "dma_device_id": "system", 00:08:48.059 "dma_device_type": 1 00:08:48.059 }, 00:08:48.059 { 00:08:48.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.059 "dma_device_type": 2 00:08:48.059 }, 00:08:48.059 { 00:08:48.059 "dma_device_id": "system", 00:08:48.059 "dma_device_type": 1 00:08:48.059 }, 00:08:48.059 { 00:08:48.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.059 "dma_device_type": 2 00:08:48.059 } 00:08:48.059 ], 00:08:48.059 "driver_specific": { 00:08:48.059 "raid": { 00:08:48.059 "uuid": "1f553cf6-70be-468b-8088-1fd5a967ea86", 00:08:48.059 "strip_size_kb": 64, 00:08:48.059 "state": "online", 00:08:48.059 "raid_level": "concat", 00:08:48.059 "superblock": false, 00:08:48.059 "num_base_bdevs": 3, 00:08:48.059 "num_base_bdevs_discovered": 3, 00:08:48.059 "num_base_bdevs_operational": 3, 00:08:48.059 "base_bdevs_list": [ 00:08:48.059 { 00:08:48.059 "name": "BaseBdev1", 
00:08:48.059 "uuid": "cc63b054-6452-4c9d-bf3d-d7bebe6c04b5", 00:08:48.059 "is_configured": true, 00:08:48.059 "data_offset": 0, 00:08:48.059 "data_size": 65536 00:08:48.059 }, 00:08:48.059 { 00:08:48.059 "name": "BaseBdev2", 00:08:48.059 "uuid": "f2917ecb-c83a-48f7-a9a4-59a8be5d4013", 00:08:48.059 "is_configured": true, 00:08:48.059 "data_offset": 0, 00:08:48.059 "data_size": 65536 00:08:48.059 }, 00:08:48.059 { 00:08:48.059 "name": "BaseBdev3", 00:08:48.059 "uuid": "9e4fb321-fa07-449d-89f8-efe696d4287f", 00:08:48.059 "is_configured": true, 00:08:48.059 "data_offset": 0, 00:08:48.059 "data_size": 65536 00:08:48.059 } 00:08:48.059 ] 00:08:48.059 } 00:08:48.059 } 00:08:48.059 }' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:48.059 BaseBdev2 00:08:48.059 BaseBdev3' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.319 [2024-12-06 23:42:59.665284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.319 [2024-12-06 23:42:59.665327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.319 [2024-12-06 23:42:59.665387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.319 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.320 "name": "Existed_Raid", 00:08:48.320 "uuid": "1f553cf6-70be-468b-8088-1fd5a967ea86", 00:08:48.320 "strip_size_kb": 64, 00:08:48.320 "state": "offline", 00:08:48.320 "raid_level": "concat", 00:08:48.320 "superblock": false, 00:08:48.320 "num_base_bdevs": 3, 00:08:48.320 "num_base_bdevs_discovered": 2, 00:08:48.320 "num_base_bdevs_operational": 2, 00:08:48.320 "base_bdevs_list": [ 00:08:48.320 { 00:08:48.320 "name": null, 00:08:48.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.320 "is_configured": false, 00:08:48.320 "data_offset": 0, 00:08:48.320 "data_size": 65536 00:08:48.320 }, 00:08:48.320 { 00:08:48.320 "name": "BaseBdev2", 00:08:48.320 "uuid": 
"f2917ecb-c83a-48f7-a9a4-59a8be5d4013", 00:08:48.320 "is_configured": true, 00:08:48.320 "data_offset": 0, 00:08:48.320 "data_size": 65536 00:08:48.320 }, 00:08:48.320 { 00:08:48.320 "name": "BaseBdev3", 00:08:48.320 "uuid": "9e4fb321-fa07-449d-89f8-efe696d4287f", 00:08:48.320 "is_configured": true, 00:08:48.320 "data_offset": 0, 00:08:48.320 "data_size": 65536 00:08:48.320 } 00:08:48.320 ] 00:08:48.320 }' 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.320 23:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.888 [2024-12-06 23:43:00.265296] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.888 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.888 [2024-12-06 23:43:00.427873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.889 [2024-12-06 23:43:00.427946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.149 23:43:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.149 BaseBdev2 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.149 
23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.149 [ 00:08:49.149 { 00:08:49.149 "name": "BaseBdev2", 00:08:49.149 "aliases": [ 00:08:49.149 "bb21d144-5135-4238-84d9-b597b95914e9" 00:08:49.149 ], 00:08:49.149 "product_name": "Malloc disk", 00:08:49.149 "block_size": 512, 00:08:49.149 "num_blocks": 65536, 00:08:49.149 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:49.149 "assigned_rate_limits": { 00:08:49.149 "rw_ios_per_sec": 0, 00:08:49.149 "rw_mbytes_per_sec": 0, 00:08:49.149 "r_mbytes_per_sec": 0, 00:08:49.149 "w_mbytes_per_sec": 0 00:08:49.149 }, 00:08:49.149 "claimed": false, 00:08:49.149 "zoned": false, 00:08:49.149 "supported_io_types": { 00:08:49.149 "read": true, 00:08:49.149 "write": true, 00:08:49.149 "unmap": true, 00:08:49.149 "flush": true, 00:08:49.149 "reset": true, 00:08:49.149 "nvme_admin": false, 00:08:49.149 "nvme_io": false, 00:08:49.149 "nvme_io_md": false, 00:08:49.149 "write_zeroes": true, 
00:08:49.149 "zcopy": true, 00:08:49.149 "get_zone_info": false, 00:08:49.149 "zone_management": false, 00:08:49.149 "zone_append": false, 00:08:49.149 "compare": false, 00:08:49.149 "compare_and_write": false, 00:08:49.149 "abort": true, 00:08:49.149 "seek_hole": false, 00:08:49.149 "seek_data": false, 00:08:49.149 "copy": true, 00:08:49.149 "nvme_iov_md": false 00:08:49.149 }, 00:08:49.149 "memory_domains": [ 00:08:49.149 { 00:08:49.149 "dma_device_id": "system", 00:08:49.149 "dma_device_type": 1 00:08:49.149 }, 00:08:49.149 { 00:08:49.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.149 "dma_device_type": 2 00:08:49.149 } 00:08:49.149 ], 00:08:49.149 "driver_specific": {} 00:08:49.149 } 00:08:49.149 ] 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.149 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.411 BaseBdev3 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.411 23:43:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.411 [ 00:08:49.411 { 00:08:49.411 "name": "BaseBdev3", 00:08:49.411 "aliases": [ 00:08:49.411 "d8905db6-1d19-45f8-8843-00e82db177ac" 00:08:49.411 ], 00:08:49.411 "product_name": "Malloc disk", 00:08:49.411 "block_size": 512, 00:08:49.411 "num_blocks": 65536, 00:08:49.411 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:49.411 "assigned_rate_limits": { 00:08:49.411 "rw_ios_per_sec": 0, 00:08:49.411 "rw_mbytes_per_sec": 0, 00:08:49.411 "r_mbytes_per_sec": 0, 00:08:49.411 "w_mbytes_per_sec": 0 00:08:49.411 }, 00:08:49.411 "claimed": false, 00:08:49.411 "zoned": false, 00:08:49.411 "supported_io_types": { 00:08:49.411 "read": true, 00:08:49.411 "write": true, 00:08:49.411 "unmap": true, 00:08:49.411 "flush": true, 00:08:49.411 "reset": true, 00:08:49.411 "nvme_admin": false, 00:08:49.411 "nvme_io": false, 00:08:49.411 "nvme_io_md": false, 00:08:49.411 "write_zeroes": true, 
00:08:49.411 "zcopy": true, 00:08:49.411 "get_zone_info": false, 00:08:49.411 "zone_management": false, 00:08:49.411 "zone_append": false, 00:08:49.411 "compare": false, 00:08:49.411 "compare_and_write": false, 00:08:49.411 "abort": true, 00:08:49.411 "seek_hole": false, 00:08:49.411 "seek_data": false, 00:08:49.411 "copy": true, 00:08:49.411 "nvme_iov_md": false 00:08:49.411 }, 00:08:49.411 "memory_domains": [ 00:08:49.411 { 00:08:49.411 "dma_device_id": "system", 00:08:49.411 "dma_device_type": 1 00:08:49.411 }, 00:08:49.411 { 00:08:49.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.411 "dma_device_type": 2 00:08:49.411 } 00:08:49.411 ], 00:08:49.411 "driver_specific": {} 00:08:49.411 } 00:08:49.411 ] 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.411 [2024-12-06 23:43:00.752902] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.411 [2024-12-06 23:43:00.752959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.411 [2024-12-06 23:43:00.752981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.411 [2024-12-06 23:43:00.755099] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.411 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.412 23:43:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.412 "name": "Existed_Raid", 00:08:49.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.412 "strip_size_kb": 64, 00:08:49.412 "state": "configuring", 00:08:49.412 "raid_level": "concat", 00:08:49.412 "superblock": false, 00:08:49.412 "num_base_bdevs": 3, 00:08:49.412 "num_base_bdevs_discovered": 2, 00:08:49.412 "num_base_bdevs_operational": 3, 00:08:49.412 "base_bdevs_list": [ 00:08:49.412 { 00:08:49.412 "name": "BaseBdev1", 00:08:49.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.412 "is_configured": false, 00:08:49.412 "data_offset": 0, 00:08:49.412 "data_size": 0 00:08:49.412 }, 00:08:49.412 { 00:08:49.412 "name": "BaseBdev2", 00:08:49.412 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:49.412 "is_configured": true, 00:08:49.412 "data_offset": 0, 00:08:49.412 "data_size": 65536 00:08:49.412 }, 00:08:49.412 { 00:08:49.412 "name": "BaseBdev3", 00:08:49.412 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:49.412 "is_configured": true, 00:08:49.412 "data_offset": 0, 00:08:49.412 "data_size": 65536 00:08:49.412 } 00:08:49.412 ] 00:08:49.412 }' 00:08:49.412 23:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.412 23:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.671 [2024-12-06 23:43:01.192276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.671 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.940 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.940 "name": "Existed_Raid", 00:08:49.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.940 "strip_size_kb": 64, 00:08:49.940 "state": "configuring", 00:08:49.940 "raid_level": "concat", 00:08:49.940 "superblock": false, 
00:08:49.940 "num_base_bdevs": 3, 00:08:49.940 "num_base_bdevs_discovered": 1, 00:08:49.940 "num_base_bdevs_operational": 3, 00:08:49.940 "base_bdevs_list": [ 00:08:49.940 { 00:08:49.940 "name": "BaseBdev1", 00:08:49.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.940 "is_configured": false, 00:08:49.940 "data_offset": 0, 00:08:49.940 "data_size": 0 00:08:49.940 }, 00:08:49.940 { 00:08:49.940 "name": null, 00:08:49.940 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:49.940 "is_configured": false, 00:08:49.940 "data_offset": 0, 00:08:49.940 "data_size": 65536 00:08:49.940 }, 00:08:49.940 { 00:08:49.940 "name": "BaseBdev3", 00:08:49.940 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:49.940 "is_configured": true, 00:08:49.940 "data_offset": 0, 00:08:49.940 "data_size": 65536 00:08:49.940 } 00:08:49.940 ] 00:08:49.940 }' 00:08:49.940 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.940 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.199 
23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.199 [2024-12-06 23:43:01.650729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.199 BaseBdev1 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.199 [ 00:08:50.199 { 00:08:50.199 "name": "BaseBdev1", 00:08:50.199 "aliases": [ 00:08:50.199 "36da9694-2922-4380-972a-4b8613a50201" 00:08:50.199 ], 00:08:50.199 "product_name": 
"Malloc disk", 00:08:50.199 "block_size": 512, 00:08:50.199 "num_blocks": 65536, 00:08:50.199 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:50.199 "assigned_rate_limits": { 00:08:50.199 "rw_ios_per_sec": 0, 00:08:50.199 "rw_mbytes_per_sec": 0, 00:08:50.199 "r_mbytes_per_sec": 0, 00:08:50.199 "w_mbytes_per_sec": 0 00:08:50.199 }, 00:08:50.199 "claimed": true, 00:08:50.199 "claim_type": "exclusive_write", 00:08:50.199 "zoned": false, 00:08:50.199 "supported_io_types": { 00:08:50.199 "read": true, 00:08:50.199 "write": true, 00:08:50.199 "unmap": true, 00:08:50.199 "flush": true, 00:08:50.199 "reset": true, 00:08:50.199 "nvme_admin": false, 00:08:50.199 "nvme_io": false, 00:08:50.199 "nvme_io_md": false, 00:08:50.199 "write_zeroes": true, 00:08:50.199 "zcopy": true, 00:08:50.199 "get_zone_info": false, 00:08:50.199 "zone_management": false, 00:08:50.199 "zone_append": false, 00:08:50.199 "compare": false, 00:08:50.199 "compare_and_write": false, 00:08:50.199 "abort": true, 00:08:50.199 "seek_hole": false, 00:08:50.199 "seek_data": false, 00:08:50.199 "copy": true, 00:08:50.199 "nvme_iov_md": false 00:08:50.199 }, 00:08:50.199 "memory_domains": [ 00:08:50.199 { 00:08:50.199 "dma_device_id": "system", 00:08:50.199 "dma_device_type": 1 00:08:50.199 }, 00:08:50.199 { 00:08:50.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.199 "dma_device_type": 2 00:08:50.199 } 00:08:50.199 ], 00:08:50.199 "driver_specific": {} 00:08:50.199 } 00:08:50.199 ] 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.199 23:43:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.199 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.200 "name": "Existed_Raid", 00:08:50.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.200 "strip_size_kb": 64, 00:08:50.200 "state": "configuring", 00:08:50.200 "raid_level": "concat", 00:08:50.200 "superblock": false, 00:08:50.200 "num_base_bdevs": 3, 00:08:50.200 "num_base_bdevs_discovered": 2, 00:08:50.200 "num_base_bdevs_operational": 3, 00:08:50.200 "base_bdevs_list": [ 00:08:50.200 { 00:08:50.200 "name": "BaseBdev1", 
00:08:50.200 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:50.200 "is_configured": true, 00:08:50.200 "data_offset": 0, 00:08:50.200 "data_size": 65536 00:08:50.200 }, 00:08:50.200 { 00:08:50.200 "name": null, 00:08:50.200 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:50.200 "is_configured": false, 00:08:50.200 "data_offset": 0, 00:08:50.200 "data_size": 65536 00:08:50.200 }, 00:08:50.200 { 00:08:50.200 "name": "BaseBdev3", 00:08:50.200 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:50.200 "is_configured": true, 00:08:50.200 "data_offset": 0, 00:08:50.200 "data_size": 65536 00:08:50.200 } 00:08:50.200 ] 00:08:50.200 }' 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.200 23:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.771 [2024-12-06 23:43:02.097987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.771 
23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.771 "name": "Existed_Raid", 00:08:50.771 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:50.771 "strip_size_kb": 64, 00:08:50.771 "state": "configuring", 00:08:50.771 "raid_level": "concat", 00:08:50.771 "superblock": false, 00:08:50.771 "num_base_bdevs": 3, 00:08:50.771 "num_base_bdevs_discovered": 1, 00:08:50.771 "num_base_bdevs_operational": 3, 00:08:50.771 "base_bdevs_list": [ 00:08:50.771 { 00:08:50.771 "name": "BaseBdev1", 00:08:50.771 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:50.771 "is_configured": true, 00:08:50.771 "data_offset": 0, 00:08:50.771 "data_size": 65536 00:08:50.771 }, 00:08:50.771 { 00:08:50.771 "name": null, 00:08:50.771 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:50.771 "is_configured": false, 00:08:50.771 "data_offset": 0, 00:08:50.771 "data_size": 65536 00:08:50.771 }, 00:08:50.771 { 00:08:50.771 "name": null, 00:08:50.771 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:50.771 "is_configured": false, 00:08:50.771 "data_offset": 0, 00:08:50.771 "data_size": 65536 00:08:50.771 } 00:08:50.771 ] 00:08:50.771 }' 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.771 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.031 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.290 [2024-12-06 23:43:02.593224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.290 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.290 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.290 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.290 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.290 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.290 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.290 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.290 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.291 "name": "Existed_Raid", 00:08:51.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.291 "strip_size_kb": 64, 00:08:51.291 "state": "configuring", 00:08:51.291 "raid_level": "concat", 00:08:51.291 "superblock": false, 00:08:51.291 "num_base_bdevs": 3, 00:08:51.291 "num_base_bdevs_discovered": 2, 00:08:51.291 "num_base_bdevs_operational": 3, 00:08:51.291 "base_bdevs_list": [ 00:08:51.291 { 00:08:51.291 "name": "BaseBdev1", 00:08:51.291 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:51.291 "is_configured": true, 00:08:51.291 "data_offset": 0, 00:08:51.291 "data_size": 65536 00:08:51.291 }, 00:08:51.291 { 00:08:51.291 "name": null, 00:08:51.291 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:51.291 "is_configured": false, 00:08:51.291 "data_offset": 0, 00:08:51.291 "data_size": 65536 00:08:51.291 }, 00:08:51.291 { 00:08:51.291 "name": "BaseBdev3", 00:08:51.291 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:51.291 "is_configured": true, 00:08:51.291 "data_offset": 0, 00:08:51.291 "data_size": 65536 00:08:51.291 } 00:08:51.291 ] 00:08:51.291 }' 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.291 23:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.551 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.551 [2024-12-06 23:43:03.080370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.811 
23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.811 "name": "Existed_Raid", 00:08:51.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.811 "strip_size_kb": 64, 00:08:51.811 "state": "configuring", 00:08:51.811 "raid_level": "concat", 00:08:51.811 "superblock": false, 00:08:51.811 "num_base_bdevs": 3, 00:08:51.811 "num_base_bdevs_discovered": 1, 00:08:51.811 "num_base_bdevs_operational": 3, 00:08:51.811 "base_bdevs_list": [ 00:08:51.811 { 00:08:51.811 "name": null, 00:08:51.811 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:51.811 "is_configured": false, 00:08:51.811 "data_offset": 0, 00:08:51.811 "data_size": 65536 00:08:51.811 }, 00:08:51.811 { 00:08:51.811 "name": null, 00:08:51.811 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:51.811 "is_configured": false, 00:08:51.811 "data_offset": 0, 00:08:51.811 "data_size": 65536 00:08:51.811 }, 00:08:51.811 { 00:08:51.811 "name": "BaseBdev3", 00:08:51.811 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:51.811 "is_configured": true, 00:08:51.811 "data_offset": 0, 00:08:51.811 "data_size": 65536 00:08:51.811 } 00:08:51.811 ] 00:08:51.811 }' 00:08:51.811 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.811 23:43:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.380 [2024-12-06 23:43:03.690888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.380 23:43:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.380 "name": "Existed_Raid", 00:08:52.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.380 "strip_size_kb": 64, 00:08:52.380 "state": "configuring", 00:08:52.380 "raid_level": "concat", 00:08:52.380 "superblock": false, 00:08:52.380 "num_base_bdevs": 3, 00:08:52.380 "num_base_bdevs_discovered": 2, 00:08:52.380 "num_base_bdevs_operational": 3, 00:08:52.380 "base_bdevs_list": [ 00:08:52.380 { 00:08:52.380 "name": null, 00:08:52.380 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:52.380 "is_configured": false, 00:08:52.380 "data_offset": 0, 00:08:52.380 "data_size": 65536 00:08:52.380 }, 00:08:52.380 { 00:08:52.380 "name": "BaseBdev2", 00:08:52.380 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:52.380 "is_configured": true, 00:08:52.380 "data_offset": 
0, 00:08:52.380 "data_size": 65536 00:08:52.380 }, 00:08:52.380 { 00:08:52.380 "name": "BaseBdev3", 00:08:52.380 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:52.380 "is_configured": true, 00:08:52.380 "data_offset": 0, 00:08:52.380 "data_size": 65536 00:08:52.380 } 00:08:52.380 ] 00:08:52.380 }' 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.380 23:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 36da9694-2922-4380-972a-4b8613a50201 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.639 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.899 [2024-12-06 23:43:04.228855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:52.899 [2024-12-06 23:43:04.228906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:52.899 [2024-12-06 23:43:04.228918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:52.899 [2024-12-06 23:43:04.229190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:52.899 [2024-12-06 23:43:04.229364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:52.899 [2024-12-06 23:43:04.229386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:52.899 [2024-12-06 23:43:04.229640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.899 NewBaseBdev 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.899 
23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.899 [ 00:08:52.899 { 00:08:52.899 "name": "NewBaseBdev", 00:08:52.899 "aliases": [ 00:08:52.899 "36da9694-2922-4380-972a-4b8613a50201" 00:08:52.899 ], 00:08:52.899 "product_name": "Malloc disk", 00:08:52.899 "block_size": 512, 00:08:52.899 "num_blocks": 65536, 00:08:52.899 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:52.899 "assigned_rate_limits": { 00:08:52.899 "rw_ios_per_sec": 0, 00:08:52.899 "rw_mbytes_per_sec": 0, 00:08:52.899 "r_mbytes_per_sec": 0, 00:08:52.899 "w_mbytes_per_sec": 0 00:08:52.899 }, 00:08:52.899 "claimed": true, 00:08:52.899 "claim_type": "exclusive_write", 00:08:52.899 "zoned": false, 00:08:52.899 "supported_io_types": { 00:08:52.899 "read": true, 00:08:52.899 "write": true, 00:08:52.899 "unmap": true, 00:08:52.899 "flush": true, 00:08:52.899 "reset": true, 00:08:52.899 "nvme_admin": false, 00:08:52.899 "nvme_io": false, 00:08:52.899 "nvme_io_md": false, 00:08:52.899 "write_zeroes": true, 00:08:52.899 "zcopy": true, 00:08:52.899 "get_zone_info": false, 00:08:52.899 "zone_management": false, 00:08:52.899 "zone_append": false, 00:08:52.899 "compare": false, 00:08:52.899 "compare_and_write": false, 00:08:52.899 "abort": true, 00:08:52.899 "seek_hole": false, 00:08:52.899 "seek_data": false, 00:08:52.899 "copy": true, 00:08:52.899 "nvme_iov_md": false 00:08:52.899 }, 00:08:52.899 
"memory_domains": [ 00:08:52.899 { 00:08:52.899 "dma_device_id": "system", 00:08:52.899 "dma_device_type": 1 00:08:52.899 }, 00:08:52.899 { 00:08:52.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.899 "dma_device_type": 2 00:08:52.899 } 00:08:52.899 ], 00:08:52.899 "driver_specific": {} 00:08:52.899 } 00:08:52.899 ] 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.899 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.899 "name": "Existed_Raid", 00:08:52.899 "uuid": "db0b4e05-c456-4441-9c80-7bcd2baaa91d", 00:08:52.899 "strip_size_kb": 64, 00:08:52.899 "state": "online", 00:08:52.899 "raid_level": "concat", 00:08:52.899 "superblock": false, 00:08:52.899 "num_base_bdevs": 3, 00:08:52.899 "num_base_bdevs_discovered": 3, 00:08:52.899 "num_base_bdevs_operational": 3, 00:08:52.899 "base_bdevs_list": [ 00:08:52.899 { 00:08:52.899 "name": "NewBaseBdev", 00:08:52.899 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:52.899 "is_configured": true, 00:08:52.899 "data_offset": 0, 00:08:52.899 "data_size": 65536 00:08:52.899 }, 00:08:52.899 { 00:08:52.899 "name": "BaseBdev2", 00:08:52.899 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:52.899 "is_configured": true, 00:08:52.899 "data_offset": 0, 00:08:52.899 "data_size": 65536 00:08:52.900 }, 00:08:52.900 { 00:08:52.900 "name": "BaseBdev3", 00:08:52.900 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:52.900 "is_configured": true, 00:08:52.900 "data_offset": 0, 00:08:52.900 "data_size": 65536 00:08:52.900 } 00:08:52.900 ] 00:08:52.900 }' 00:08:52.900 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.900 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.159 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:53.159 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:53.159 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.160 [2024-12-06 23:43:04.680479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.160 "name": "Existed_Raid", 00:08:53.160 "aliases": [ 00:08:53.160 "db0b4e05-c456-4441-9c80-7bcd2baaa91d" 00:08:53.160 ], 00:08:53.160 "product_name": "Raid Volume", 00:08:53.160 "block_size": 512, 00:08:53.160 "num_blocks": 196608, 00:08:53.160 "uuid": "db0b4e05-c456-4441-9c80-7bcd2baaa91d", 00:08:53.160 "assigned_rate_limits": { 00:08:53.160 "rw_ios_per_sec": 0, 00:08:53.160 "rw_mbytes_per_sec": 0, 00:08:53.160 "r_mbytes_per_sec": 0, 00:08:53.160 "w_mbytes_per_sec": 0 00:08:53.160 }, 00:08:53.160 "claimed": false, 00:08:53.160 "zoned": false, 00:08:53.160 "supported_io_types": { 00:08:53.160 "read": true, 00:08:53.160 "write": true, 00:08:53.160 "unmap": true, 00:08:53.160 "flush": true, 00:08:53.160 "reset": true, 00:08:53.160 "nvme_admin": false, 00:08:53.160 "nvme_io": false, 00:08:53.160 "nvme_io_md": false, 00:08:53.160 "write_zeroes": true, 
00:08:53.160 "zcopy": false, 00:08:53.160 "get_zone_info": false, 00:08:53.160 "zone_management": false, 00:08:53.160 "zone_append": false, 00:08:53.160 "compare": false, 00:08:53.160 "compare_and_write": false, 00:08:53.160 "abort": false, 00:08:53.160 "seek_hole": false, 00:08:53.160 "seek_data": false, 00:08:53.160 "copy": false, 00:08:53.160 "nvme_iov_md": false 00:08:53.160 }, 00:08:53.160 "memory_domains": [ 00:08:53.160 { 00:08:53.160 "dma_device_id": "system", 00:08:53.160 "dma_device_type": 1 00:08:53.160 }, 00:08:53.160 { 00:08:53.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.160 "dma_device_type": 2 00:08:53.160 }, 00:08:53.160 { 00:08:53.160 "dma_device_id": "system", 00:08:53.160 "dma_device_type": 1 00:08:53.160 }, 00:08:53.160 { 00:08:53.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.160 "dma_device_type": 2 00:08:53.160 }, 00:08:53.160 { 00:08:53.160 "dma_device_id": "system", 00:08:53.160 "dma_device_type": 1 00:08:53.160 }, 00:08:53.160 { 00:08:53.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.160 "dma_device_type": 2 00:08:53.160 } 00:08:53.160 ], 00:08:53.160 "driver_specific": { 00:08:53.160 "raid": { 00:08:53.160 "uuid": "db0b4e05-c456-4441-9c80-7bcd2baaa91d", 00:08:53.160 "strip_size_kb": 64, 00:08:53.160 "state": "online", 00:08:53.160 "raid_level": "concat", 00:08:53.160 "superblock": false, 00:08:53.160 "num_base_bdevs": 3, 00:08:53.160 "num_base_bdevs_discovered": 3, 00:08:53.160 "num_base_bdevs_operational": 3, 00:08:53.160 "base_bdevs_list": [ 00:08:53.160 { 00:08:53.160 "name": "NewBaseBdev", 00:08:53.160 "uuid": "36da9694-2922-4380-972a-4b8613a50201", 00:08:53.160 "is_configured": true, 00:08:53.160 "data_offset": 0, 00:08:53.160 "data_size": 65536 00:08:53.160 }, 00:08:53.160 { 00:08:53.160 "name": "BaseBdev2", 00:08:53.160 "uuid": "bb21d144-5135-4238-84d9-b597b95914e9", 00:08:53.160 "is_configured": true, 00:08:53.160 "data_offset": 0, 00:08:53.160 "data_size": 65536 00:08:53.160 }, 00:08:53.160 { 
00:08:53.160 "name": "BaseBdev3", 00:08:53.160 "uuid": "d8905db6-1d19-45f8-8843-00e82db177ac", 00:08:53.160 "is_configured": true, 00:08:53.160 "data_offset": 0, 00:08:53.160 "data_size": 65536 00:08:53.160 } 00:08:53.160 ] 00:08:53.160 } 00:08:53.160 } 00:08:53.160 }' 00:08:53.160 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:53.419 BaseBdev2 00:08:53.419 BaseBdev3' 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.419 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:53.420 [2024-12-06 23:43:04.943688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.420 [2024-12-06 23:43:04.943732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.420 [2024-12-06 23:43:04.943822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.420 [2024-12-06 23:43:04.943890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.420 [2024-12-06 23:43:04.943907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65521 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65521 ']' 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65521 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65521 00:08:53.420 killing process with pid 65521 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65521' 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65521 00:08:53.420 [2024-12-06 23:43:04.978201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.420 23:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65521 00:08:53.990 [2024-12-06 23:43:05.311949] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.368 23:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:55.368 00:08:55.368 real 0m10.426s 00:08:55.368 user 0m16.283s 00:08:55.368 sys 0m1.870s 00:08:55.368 23:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.368 23:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 ************************************ 00:08:55.368 END TEST raid_state_function_test 00:08:55.368 ************************************ 00:08:55.368 23:43:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:55.368 23:43:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.368 23:43:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.368 23:43:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.368 ************************************ 00:08:55.368 START TEST raid_state_function_test_sb 00:08:55.368 ************************************ 00:08:55.368 23:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:55.368 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66142 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:55.369 Process raid pid: 66142 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66142' 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66142 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66142 ']' 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.369 23:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.369 [2024-12-06 23:43:06.731639] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:08:55.369 [2024-12-06 23:43:06.731781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.369 [2024-12-06 23:43:06.900447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.629 [2024-12-06 23:43:07.040383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.890 [2024-12-06 23:43:07.287784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.890 [2024-12-06 23:43:07.287832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 [2024-12-06 23:43:07.543732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.150 [2024-12-06 23:43:07.543794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.150 [2024-12-06 
23:43:07.543804] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.150 [2024-12-06 23:43:07.543815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.150 [2024-12-06 23:43:07.543821] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.150 [2024-12-06 23:43:07.543831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.150 "name": "Existed_Raid", 00:08:56.150 "uuid": "15a8c0bc-4f82-47f8-b6a8-fd17df0dad60", 00:08:56.150 "strip_size_kb": 64, 00:08:56.150 "state": "configuring", 00:08:56.150 "raid_level": "concat", 00:08:56.150 "superblock": true, 00:08:56.150 "num_base_bdevs": 3, 00:08:56.150 "num_base_bdevs_discovered": 0, 00:08:56.150 "num_base_bdevs_operational": 3, 00:08:56.150 "base_bdevs_list": [ 00:08:56.150 { 00:08:56.150 "name": "BaseBdev1", 00:08:56.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.150 "is_configured": false, 00:08:56.150 "data_offset": 0, 00:08:56.150 "data_size": 0 00:08:56.150 }, 00:08:56.150 { 00:08:56.150 "name": "BaseBdev2", 00:08:56.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.150 "is_configured": false, 00:08:56.150 "data_offset": 0, 00:08:56.150 "data_size": 0 00:08:56.150 }, 00:08:56.150 { 00:08:56.150 "name": "BaseBdev3", 00:08:56.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.150 "is_configured": false, 00:08:56.150 "data_offset": 0, 00:08:56.150 "data_size": 0 00:08:56.150 } 00:08:56.150 ] 00:08:56.150 }' 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.150 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.720 [2024-12-06 23:43:07.978965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.720 [2024-12-06 23:43:07.979026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.720 [2024-12-06 23:43:07.990932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.720 [2024-12-06 23:43:07.990984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.720 [2024-12-06 23:43:07.990995] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.720 [2024-12-06 23:43:07.991004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.720 [2024-12-06 23:43:07.991011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.720 [2024-12-06 23:43:07.991020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.720 
23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.720 23:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.720 [2024-12-06 23:43:08.047250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.720 BaseBdev1 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.721 [ 00:08:56.721 { 
00:08:56.721 "name": "BaseBdev1", 00:08:56.721 "aliases": [ 00:08:56.721 "8917149e-3adf-463b-8d85-918320620d35" 00:08:56.721 ], 00:08:56.721 "product_name": "Malloc disk", 00:08:56.721 "block_size": 512, 00:08:56.721 "num_blocks": 65536, 00:08:56.721 "uuid": "8917149e-3adf-463b-8d85-918320620d35", 00:08:56.721 "assigned_rate_limits": { 00:08:56.721 "rw_ios_per_sec": 0, 00:08:56.721 "rw_mbytes_per_sec": 0, 00:08:56.721 "r_mbytes_per_sec": 0, 00:08:56.721 "w_mbytes_per_sec": 0 00:08:56.721 }, 00:08:56.721 "claimed": true, 00:08:56.721 "claim_type": "exclusive_write", 00:08:56.721 "zoned": false, 00:08:56.721 "supported_io_types": { 00:08:56.721 "read": true, 00:08:56.721 "write": true, 00:08:56.721 "unmap": true, 00:08:56.721 "flush": true, 00:08:56.721 "reset": true, 00:08:56.721 "nvme_admin": false, 00:08:56.721 "nvme_io": false, 00:08:56.721 "nvme_io_md": false, 00:08:56.721 "write_zeroes": true, 00:08:56.721 "zcopy": true, 00:08:56.721 "get_zone_info": false, 00:08:56.721 "zone_management": false, 00:08:56.721 "zone_append": false, 00:08:56.721 "compare": false, 00:08:56.721 "compare_and_write": false, 00:08:56.721 "abort": true, 00:08:56.721 "seek_hole": false, 00:08:56.721 "seek_data": false, 00:08:56.721 "copy": true, 00:08:56.721 "nvme_iov_md": false 00:08:56.721 }, 00:08:56.721 "memory_domains": [ 00:08:56.721 { 00:08:56.721 "dma_device_id": "system", 00:08:56.721 "dma_device_type": 1 00:08:56.721 }, 00:08:56.721 { 00:08:56.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.721 "dma_device_type": 2 00:08:56.721 } 00:08:56.721 ], 00:08:56.721 "driver_specific": {} 00:08:56.721 } 00:08:56.721 ] 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.721 "name": "Existed_Raid", 00:08:56.721 "uuid": "93fa1442-cf62-464a-b698-51b21b2104fb", 00:08:56.721 "strip_size_kb": 64, 00:08:56.721 "state": "configuring", 00:08:56.721 "raid_level": "concat", 00:08:56.721 "superblock": true, 00:08:56.721 
"num_base_bdevs": 3, 00:08:56.721 "num_base_bdevs_discovered": 1, 00:08:56.721 "num_base_bdevs_operational": 3, 00:08:56.721 "base_bdevs_list": [ 00:08:56.721 { 00:08:56.721 "name": "BaseBdev1", 00:08:56.721 "uuid": "8917149e-3adf-463b-8d85-918320620d35", 00:08:56.721 "is_configured": true, 00:08:56.721 "data_offset": 2048, 00:08:56.721 "data_size": 63488 00:08:56.721 }, 00:08:56.721 { 00:08:56.721 "name": "BaseBdev2", 00:08:56.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.721 "is_configured": false, 00:08:56.721 "data_offset": 0, 00:08:56.721 "data_size": 0 00:08:56.721 }, 00:08:56.721 { 00:08:56.721 "name": "BaseBdev3", 00:08:56.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.721 "is_configured": false, 00:08:56.721 "data_offset": 0, 00:08:56.721 "data_size": 0 00:08:56.721 } 00:08:56.721 ] 00:08:56.721 }' 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.721 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.291 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.291 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.291 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.291 [2024-12-06 23:43:08.566582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.291 [2024-12-06 23:43:08.566676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:57.291 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.291 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.291 
23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.291 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 [2024-12-06 23:43:08.574606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.292 [2024-12-06 23:43:08.576820] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.292 [2024-12-06 23:43:08.576862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.292 [2024-12-06 23:43:08.576872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.292 [2024-12-06 23:43:08.576880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.292 "name": "Existed_Raid", 00:08:57.292 "uuid": "e1ed9ad3-a4bb-42c6-a164-eb0f743030ac", 00:08:57.292 "strip_size_kb": 64, 00:08:57.292 "state": "configuring", 00:08:57.292 "raid_level": "concat", 00:08:57.292 "superblock": true, 00:08:57.292 "num_base_bdevs": 3, 00:08:57.292 "num_base_bdevs_discovered": 1, 00:08:57.292 "num_base_bdevs_operational": 3, 00:08:57.292 "base_bdevs_list": [ 00:08:57.292 { 00:08:57.292 "name": "BaseBdev1", 00:08:57.292 "uuid": "8917149e-3adf-463b-8d85-918320620d35", 00:08:57.292 "is_configured": true, 00:08:57.292 "data_offset": 2048, 00:08:57.292 "data_size": 63488 00:08:57.292 }, 00:08:57.292 { 00:08:57.292 "name": "BaseBdev2", 00:08:57.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.292 "is_configured": false, 00:08:57.292 "data_offset": 0, 00:08:57.292 "data_size": 0 00:08:57.292 }, 00:08:57.292 { 00:08:57.292 "name": "BaseBdev3", 00:08:57.292 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:57.292 "is_configured": false, 00:08:57.292 "data_offset": 0, 00:08:57.292 "data_size": 0 00:08:57.292 } 00:08:57.292 ] 00:08:57.292 }' 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.292 23:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.552 [2024-12-06 23:43:09.057668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.552 BaseBdev2 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.552 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.553 [ 00:08:57.553 { 00:08:57.553 "name": "BaseBdev2", 00:08:57.553 "aliases": [ 00:08:57.553 "71290c95-b930-4ca0-af67-8f60fc70ee1a" 00:08:57.553 ], 00:08:57.553 "product_name": "Malloc disk", 00:08:57.553 "block_size": 512, 00:08:57.553 "num_blocks": 65536, 00:08:57.553 "uuid": "71290c95-b930-4ca0-af67-8f60fc70ee1a", 00:08:57.553 "assigned_rate_limits": { 00:08:57.553 "rw_ios_per_sec": 0, 00:08:57.553 "rw_mbytes_per_sec": 0, 00:08:57.553 "r_mbytes_per_sec": 0, 00:08:57.553 "w_mbytes_per_sec": 0 00:08:57.553 }, 00:08:57.553 "claimed": true, 00:08:57.553 "claim_type": "exclusive_write", 00:08:57.553 "zoned": false, 00:08:57.553 "supported_io_types": { 00:08:57.553 "read": true, 00:08:57.553 "write": true, 00:08:57.553 "unmap": true, 00:08:57.553 "flush": true, 00:08:57.553 "reset": true, 00:08:57.553 "nvme_admin": false, 00:08:57.553 "nvme_io": false, 00:08:57.553 "nvme_io_md": false, 00:08:57.553 "write_zeroes": true, 00:08:57.553 "zcopy": true, 00:08:57.553 "get_zone_info": false, 00:08:57.553 "zone_management": false, 00:08:57.553 "zone_append": false, 00:08:57.553 "compare": false, 00:08:57.553 "compare_and_write": false, 00:08:57.553 "abort": true, 00:08:57.553 "seek_hole": false, 00:08:57.553 "seek_data": false, 00:08:57.553 "copy": true, 00:08:57.553 "nvme_iov_md": false 00:08:57.553 }, 00:08:57.553 "memory_domains": [ 00:08:57.553 { 00:08:57.553 "dma_device_id": "system", 00:08:57.553 "dma_device_type": 1 00:08:57.553 }, 00:08:57.553 { 00:08:57.553 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.553 "dma_device_type": 2 00:08:57.553 } 00:08:57.553 ], 00:08:57.553 "driver_specific": {} 00:08:57.553 } 00:08:57.553 ] 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.553 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.813 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.813 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.813 "name": "Existed_Raid", 00:08:57.813 "uuid": "e1ed9ad3-a4bb-42c6-a164-eb0f743030ac", 00:08:57.813 "strip_size_kb": 64, 00:08:57.813 "state": "configuring", 00:08:57.813 "raid_level": "concat", 00:08:57.813 "superblock": true, 00:08:57.813 "num_base_bdevs": 3, 00:08:57.813 "num_base_bdevs_discovered": 2, 00:08:57.813 "num_base_bdevs_operational": 3, 00:08:57.813 "base_bdevs_list": [ 00:08:57.813 { 00:08:57.813 "name": "BaseBdev1", 00:08:57.813 "uuid": "8917149e-3adf-463b-8d85-918320620d35", 00:08:57.813 "is_configured": true, 00:08:57.813 "data_offset": 2048, 00:08:57.813 "data_size": 63488 00:08:57.813 }, 00:08:57.813 { 00:08:57.813 "name": "BaseBdev2", 00:08:57.813 "uuid": "71290c95-b930-4ca0-af67-8f60fc70ee1a", 00:08:57.813 "is_configured": true, 00:08:57.813 "data_offset": 2048, 00:08:57.813 "data_size": 63488 00:08:57.813 }, 00:08:57.813 { 00:08:57.813 "name": "BaseBdev3", 00:08:57.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.813 "is_configured": false, 00:08:57.813 "data_offset": 0, 00:08:57.813 "data_size": 0 00:08:57.813 } 00:08:57.813 ] 00:08:57.813 }' 00:08:57.813 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.813 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.073 23:43:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.073 [2024-12-06 23:43:09.565739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.073 [2024-12-06 23:43:09.566014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:58.073 [2024-12-06 23:43:09.566036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:58.073 [2024-12-06 23:43:09.566326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:58.073 BaseBdev3 00:08:58.073 [2024-12-06 23:43:09.566509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:58.073 [2024-12-06 23:43:09.566519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:58.073 [2024-12-06 23:43:09.566664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.073 [ 00:08:58.073 { 00:08:58.073 "name": "BaseBdev3", 00:08:58.073 "aliases": [ 00:08:58.073 "f2316e1b-b205-44c4-aaf2-241d56e2d0ba" 00:08:58.073 ], 00:08:58.073 "product_name": "Malloc disk", 00:08:58.073 "block_size": 512, 00:08:58.073 "num_blocks": 65536, 00:08:58.073 "uuid": "f2316e1b-b205-44c4-aaf2-241d56e2d0ba", 00:08:58.073 "assigned_rate_limits": { 00:08:58.073 "rw_ios_per_sec": 0, 00:08:58.073 "rw_mbytes_per_sec": 0, 00:08:58.073 "r_mbytes_per_sec": 0, 00:08:58.073 "w_mbytes_per_sec": 0 00:08:58.073 }, 00:08:58.073 "claimed": true, 00:08:58.073 "claim_type": "exclusive_write", 00:08:58.073 "zoned": false, 00:08:58.073 "supported_io_types": { 00:08:58.073 "read": true, 00:08:58.073 "write": true, 00:08:58.073 "unmap": true, 00:08:58.073 "flush": true, 00:08:58.073 "reset": true, 00:08:58.073 "nvme_admin": false, 00:08:58.073 "nvme_io": false, 00:08:58.073 "nvme_io_md": false, 00:08:58.073 "write_zeroes": true, 00:08:58.073 "zcopy": true, 00:08:58.073 "get_zone_info": false, 00:08:58.073 "zone_management": false, 00:08:58.073 "zone_append": false, 00:08:58.073 "compare": false, 00:08:58.073 "compare_and_write": false, 00:08:58.073 "abort": true, 00:08:58.073 "seek_hole": false, 00:08:58.073 "seek_data": false, 
00:08:58.073 "copy": true, 00:08:58.073 "nvme_iov_md": false 00:08:58.073 }, 00:08:58.073 "memory_domains": [ 00:08:58.073 { 00:08:58.073 "dma_device_id": "system", 00:08:58.073 "dma_device_type": 1 00:08:58.073 }, 00:08:58.073 { 00:08:58.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.073 "dma_device_type": 2 00:08:58.073 } 00:08:58.073 ], 00:08:58.073 "driver_specific": {} 00:08:58.073 } 00:08:58.073 ] 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.073 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.074 23:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.333 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.333 "name": "Existed_Raid", 00:08:58.333 "uuid": "e1ed9ad3-a4bb-42c6-a164-eb0f743030ac", 00:08:58.333 "strip_size_kb": 64, 00:08:58.333 "state": "online", 00:08:58.333 "raid_level": "concat", 00:08:58.333 "superblock": true, 00:08:58.333 "num_base_bdevs": 3, 00:08:58.333 "num_base_bdevs_discovered": 3, 00:08:58.333 "num_base_bdevs_operational": 3, 00:08:58.333 "base_bdevs_list": [ 00:08:58.333 { 00:08:58.333 "name": "BaseBdev1", 00:08:58.333 "uuid": "8917149e-3adf-463b-8d85-918320620d35", 00:08:58.333 "is_configured": true, 00:08:58.333 "data_offset": 2048, 00:08:58.333 "data_size": 63488 00:08:58.333 }, 00:08:58.333 { 00:08:58.333 "name": "BaseBdev2", 00:08:58.333 "uuid": "71290c95-b930-4ca0-af67-8f60fc70ee1a", 00:08:58.333 "is_configured": true, 00:08:58.333 "data_offset": 2048, 00:08:58.333 "data_size": 63488 00:08:58.333 }, 00:08:58.333 { 00:08:58.333 "name": "BaseBdev3", 00:08:58.333 "uuid": "f2316e1b-b205-44c4-aaf2-241d56e2d0ba", 00:08:58.333 "is_configured": true, 00:08:58.333 "data_offset": 2048, 00:08:58.333 "data_size": 63488 00:08:58.333 } 00:08:58.333 ] 00:08:58.333 }' 00:08:58.333 23:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.333 23:43:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.598 [2024-12-06 23:43:10.037283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.598 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.598 "name": "Existed_Raid", 00:08:58.598 "aliases": [ 00:08:58.598 "e1ed9ad3-a4bb-42c6-a164-eb0f743030ac" 00:08:58.598 ], 00:08:58.598 "product_name": "Raid Volume", 00:08:58.598 "block_size": 512, 00:08:58.598 "num_blocks": 190464, 00:08:58.598 "uuid": "e1ed9ad3-a4bb-42c6-a164-eb0f743030ac", 00:08:58.598 "assigned_rate_limits": { 00:08:58.598 "rw_ios_per_sec": 0, 00:08:58.598 "rw_mbytes_per_sec": 0, 00:08:58.598 
"r_mbytes_per_sec": 0, 00:08:58.598 "w_mbytes_per_sec": 0 00:08:58.598 }, 00:08:58.598 "claimed": false, 00:08:58.598 "zoned": false, 00:08:58.598 "supported_io_types": { 00:08:58.598 "read": true, 00:08:58.598 "write": true, 00:08:58.598 "unmap": true, 00:08:58.598 "flush": true, 00:08:58.598 "reset": true, 00:08:58.598 "nvme_admin": false, 00:08:58.598 "nvme_io": false, 00:08:58.598 "nvme_io_md": false, 00:08:58.598 "write_zeroes": true, 00:08:58.598 "zcopy": false, 00:08:58.598 "get_zone_info": false, 00:08:58.598 "zone_management": false, 00:08:58.598 "zone_append": false, 00:08:58.598 "compare": false, 00:08:58.598 "compare_and_write": false, 00:08:58.598 "abort": false, 00:08:58.598 "seek_hole": false, 00:08:58.598 "seek_data": false, 00:08:58.598 "copy": false, 00:08:58.598 "nvme_iov_md": false 00:08:58.598 }, 00:08:58.598 "memory_domains": [ 00:08:58.598 { 00:08:58.598 "dma_device_id": "system", 00:08:58.598 "dma_device_type": 1 00:08:58.598 }, 00:08:58.598 { 00:08:58.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.598 "dma_device_type": 2 00:08:58.598 }, 00:08:58.598 { 00:08:58.598 "dma_device_id": "system", 00:08:58.598 "dma_device_type": 1 00:08:58.599 }, 00:08:58.599 { 00:08:58.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.599 "dma_device_type": 2 00:08:58.599 }, 00:08:58.599 { 00:08:58.599 "dma_device_id": "system", 00:08:58.599 "dma_device_type": 1 00:08:58.599 }, 00:08:58.599 { 00:08:58.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.599 "dma_device_type": 2 00:08:58.599 } 00:08:58.599 ], 00:08:58.599 "driver_specific": { 00:08:58.599 "raid": { 00:08:58.599 "uuid": "e1ed9ad3-a4bb-42c6-a164-eb0f743030ac", 00:08:58.599 "strip_size_kb": 64, 00:08:58.599 "state": "online", 00:08:58.599 "raid_level": "concat", 00:08:58.599 "superblock": true, 00:08:58.599 "num_base_bdevs": 3, 00:08:58.599 "num_base_bdevs_discovered": 3, 00:08:58.599 "num_base_bdevs_operational": 3, 00:08:58.599 "base_bdevs_list": [ 00:08:58.599 { 00:08:58.599 
"name": "BaseBdev1", 00:08:58.599 "uuid": "8917149e-3adf-463b-8d85-918320620d35", 00:08:58.599 "is_configured": true, 00:08:58.599 "data_offset": 2048, 00:08:58.599 "data_size": 63488 00:08:58.599 }, 00:08:58.599 { 00:08:58.599 "name": "BaseBdev2", 00:08:58.599 "uuid": "71290c95-b930-4ca0-af67-8f60fc70ee1a", 00:08:58.599 "is_configured": true, 00:08:58.599 "data_offset": 2048, 00:08:58.599 "data_size": 63488 00:08:58.599 }, 00:08:58.599 { 00:08:58.599 "name": "BaseBdev3", 00:08:58.599 "uuid": "f2316e1b-b205-44c4-aaf2-241d56e2d0ba", 00:08:58.599 "is_configured": true, 00:08:58.599 "data_offset": 2048, 00:08:58.599 "data_size": 63488 00:08:58.599 } 00:08:58.599 ] 00:08:58.599 } 00:08:58.599 } 00:08:58.599 }' 00:08:58.599 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.599 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:58.599 BaseBdev2 00:08:58.599 BaseBdev3' 00:08:58.599 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.867 23:43:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.867 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.868 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.868 [2024-12-06 23:43:10.320606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.868 [2024-12-06 23:43:10.320745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.868 [2024-12-06 23:43:10.320847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.127 "name": "Existed_Raid", 00:08:59.127 "uuid": "e1ed9ad3-a4bb-42c6-a164-eb0f743030ac", 00:08:59.127 "strip_size_kb": 64, 00:08:59.127 "state": "offline", 00:08:59.127 "raid_level": "concat", 00:08:59.127 "superblock": true, 00:08:59.127 "num_base_bdevs": 3, 00:08:59.127 "num_base_bdevs_discovered": 2, 00:08:59.127 "num_base_bdevs_operational": 2, 00:08:59.127 "base_bdevs_list": [ 00:08:59.127 { 00:08:59.127 "name": null, 00:08:59.127 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:59.127 "is_configured": false, 00:08:59.127 "data_offset": 0, 00:08:59.127 "data_size": 63488 00:08:59.127 }, 00:08:59.127 { 00:08:59.127 "name": "BaseBdev2", 00:08:59.127 "uuid": "71290c95-b930-4ca0-af67-8f60fc70ee1a", 00:08:59.127 "is_configured": true, 00:08:59.127 "data_offset": 2048, 00:08:59.127 "data_size": 63488 00:08:59.127 }, 00:08:59.127 { 00:08:59.127 "name": "BaseBdev3", 00:08:59.127 "uuid": "f2316e1b-b205-44c4-aaf2-241d56e2d0ba", 00:08:59.127 "is_configured": true, 00:08:59.127 "data_offset": 2048, 00:08:59.127 "data_size": 63488 00:08:59.127 } 00:08:59.127 ] 00:08:59.127 }' 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.127 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.387 23:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.387 [2024-12-06 23:43:10.913567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.647 [2024-12-06 23:43:11.071239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.647 [2024-12-06 23:43:11.071325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:59.647 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.908 BaseBdev2 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.908 
23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.908 [ 00:08:59.908 { 00:08:59.908 "name": "BaseBdev2", 00:08:59.908 "aliases": [ 00:08:59.908 "8b8af42e-4b61-40ae-a156-6e523758ea34" 00:08:59.908 ], 00:08:59.908 "product_name": "Malloc disk", 00:08:59.908 "block_size": 512, 00:08:59.908 "num_blocks": 65536, 00:08:59.908 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:08:59.908 "assigned_rate_limits": { 00:08:59.908 "rw_ios_per_sec": 0, 00:08:59.908 "rw_mbytes_per_sec": 0, 00:08:59.908 "r_mbytes_per_sec": 0, 00:08:59.908 "w_mbytes_per_sec": 0 
00:08:59.908 }, 00:08:59.908 "claimed": false, 00:08:59.908 "zoned": false, 00:08:59.908 "supported_io_types": { 00:08:59.908 "read": true, 00:08:59.908 "write": true, 00:08:59.908 "unmap": true, 00:08:59.908 "flush": true, 00:08:59.908 "reset": true, 00:08:59.908 "nvme_admin": false, 00:08:59.908 "nvme_io": false, 00:08:59.908 "nvme_io_md": false, 00:08:59.908 "write_zeroes": true, 00:08:59.908 "zcopy": true, 00:08:59.908 "get_zone_info": false, 00:08:59.908 "zone_management": false, 00:08:59.908 "zone_append": false, 00:08:59.908 "compare": false, 00:08:59.908 "compare_and_write": false, 00:08:59.908 "abort": true, 00:08:59.908 "seek_hole": false, 00:08:59.908 "seek_data": false, 00:08:59.908 "copy": true, 00:08:59.908 "nvme_iov_md": false 00:08:59.908 }, 00:08:59.908 "memory_domains": [ 00:08:59.908 { 00:08:59.908 "dma_device_id": "system", 00:08:59.908 "dma_device_type": 1 00:08:59.908 }, 00:08:59.908 { 00:08:59.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.908 "dma_device_type": 2 00:08:59.908 } 00:08:59.908 ], 00:08:59.908 "driver_specific": {} 00:08:59.908 } 00:08:59.908 ] 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.908 BaseBdev3 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.908 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.909 [ 00:08:59.909 { 00:08:59.909 "name": "BaseBdev3", 00:08:59.909 "aliases": [ 00:08:59.909 "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf" 00:08:59.909 ], 00:08:59.909 "product_name": "Malloc disk", 00:08:59.909 "block_size": 512, 00:08:59.909 "num_blocks": 65536, 00:08:59.909 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:08:59.909 "assigned_rate_limits": { 00:08:59.909 "rw_ios_per_sec": 0, 00:08:59.909 "rw_mbytes_per_sec": 0, 
00:08:59.909 "r_mbytes_per_sec": 0, 00:08:59.909 "w_mbytes_per_sec": 0 00:08:59.909 }, 00:08:59.909 "claimed": false, 00:08:59.909 "zoned": false, 00:08:59.909 "supported_io_types": { 00:08:59.909 "read": true, 00:08:59.909 "write": true, 00:08:59.909 "unmap": true, 00:08:59.909 "flush": true, 00:08:59.909 "reset": true, 00:08:59.909 "nvme_admin": false, 00:08:59.909 "nvme_io": false, 00:08:59.909 "nvme_io_md": false, 00:08:59.909 "write_zeroes": true, 00:08:59.909 "zcopy": true, 00:08:59.909 "get_zone_info": false, 00:08:59.909 "zone_management": false, 00:08:59.909 "zone_append": false, 00:08:59.909 "compare": false, 00:08:59.909 "compare_and_write": false, 00:08:59.909 "abort": true, 00:08:59.909 "seek_hole": false, 00:08:59.909 "seek_data": false, 00:08:59.909 "copy": true, 00:08:59.909 "nvme_iov_md": false 00:08:59.909 }, 00:08:59.909 "memory_domains": [ 00:08:59.909 { 00:08:59.909 "dma_device_id": "system", 00:08:59.909 "dma_device_type": 1 00:08:59.909 }, 00:08:59.909 { 00:08:59.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.909 "dma_device_type": 2 00:08:59.909 } 00:08:59.909 ], 00:08:59.909 "driver_specific": {} 00:08:59.909 } 00:08:59.909 ] 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.909 [2024-12-06 23:43:11.400813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.909 [2024-12-06 23:43:11.400939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.909 [2024-12-06 23:43:11.400997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.909 [2024-12-06 23:43:11.403031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.909 23:43:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.909 "name": "Existed_Raid", 00:08:59.909 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:08:59.909 "strip_size_kb": 64, 00:08:59.909 "state": "configuring", 00:08:59.909 "raid_level": "concat", 00:08:59.909 "superblock": true, 00:08:59.909 "num_base_bdevs": 3, 00:08:59.909 "num_base_bdevs_discovered": 2, 00:08:59.909 "num_base_bdevs_operational": 3, 00:08:59.909 "base_bdevs_list": [ 00:08:59.909 { 00:08:59.909 "name": "BaseBdev1", 00:08:59.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.909 "is_configured": false, 00:08:59.909 "data_offset": 0, 00:08:59.909 "data_size": 0 00:08:59.909 }, 00:08:59.909 { 00:08:59.909 "name": "BaseBdev2", 00:08:59.909 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:08:59.909 "is_configured": true, 00:08:59.909 "data_offset": 2048, 00:08:59.909 "data_size": 63488 00:08:59.909 }, 00:08:59.909 { 00:08:59.909 "name": "BaseBdev3", 00:08:59.909 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:08:59.909 "is_configured": true, 00:08:59.909 "data_offset": 2048, 00:08:59.909 "data_size": 63488 00:08:59.909 } 00:08:59.909 ] 00:08:59.909 }' 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.909 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
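The `verify_raid_bdev_state` helper above fetches the raid bdev list with `rpc_cmd bdev_raid_get_bdevs all` and extracts one entry with `jq -r '.[] | select(.name == "Existed_Raid")'`. The same selection can be sketched in Python against a trimmed copy of the JSON from this log; the helper name and the trimmed field set are illustrative assumptions, not part of the test suite:

```python
import json

# Trimmed sample of `rpc.py bdev_raid_get_bdevs all` output, with values
# copied from the log above. Real output carries more fields (base_bdevs_list,
# superblock, ...); only the ones checked here are kept.
raid_bdevs_json = '''
[
  {
    "name": "Existed_Raid",
    "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7",
    "state": "configuring",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2
  }
]
'''

def select_raid_bdev(raw, name):
    # Python equivalent of: jq -r '.[] | select(.name == "<name>")'
    return next(b for b in json.loads(raw) if b["name"] == name)

info = select_raid_bdev(raid_bdevs_json, "Existed_Raid")
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
```

The test then compares these fields against the expected state (`configuring`, `concat`, strip size 64, 3 operational base bdevs) exactly as the local variables in the trace show.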
00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.479 [2024-12-06 23:43:11.832087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.479 23:43:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.479 "name": "Existed_Raid", 00:09:00.479 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:00.479 "strip_size_kb": 64, 00:09:00.479 "state": "configuring", 00:09:00.479 "raid_level": "concat", 00:09:00.479 "superblock": true, 00:09:00.479 "num_base_bdevs": 3, 00:09:00.479 "num_base_bdevs_discovered": 1, 00:09:00.479 "num_base_bdevs_operational": 3, 00:09:00.479 "base_bdevs_list": [ 00:09:00.479 { 00:09:00.479 "name": "BaseBdev1", 00:09:00.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.479 "is_configured": false, 00:09:00.479 "data_offset": 0, 00:09:00.479 "data_size": 0 00:09:00.479 }, 00:09:00.479 { 00:09:00.479 "name": null, 00:09:00.479 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:09:00.479 "is_configured": false, 00:09:00.479 "data_offset": 0, 00:09:00.479 "data_size": 63488 00:09:00.479 }, 00:09:00.479 { 00:09:00.479 "name": "BaseBdev3", 00:09:00.479 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:09:00.479 "is_configured": true, 00:09:00.479 "data_offset": 2048, 00:09:00.479 "data_size": 63488 00:09:00.479 } 00:09:00.479 ] 00:09:00.479 }' 00:09:00.479 23:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.480 23:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.739 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.739 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.739 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.739 23:43:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.739 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.739 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:00.739 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.739 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.739 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.999 [2024-12-06 23:43:12.329780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.999 BaseBdev1 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.999 
23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.999 [ 00:09:00.999 { 00:09:00.999 "name": "BaseBdev1", 00:09:00.999 "aliases": [ 00:09:00.999 "68c3177e-3df0-4f7b-bfe0-4fccd49449a1" 00:09:00.999 ], 00:09:00.999 "product_name": "Malloc disk", 00:09:00.999 "block_size": 512, 00:09:00.999 "num_blocks": 65536, 00:09:00.999 "uuid": "68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:00.999 "assigned_rate_limits": { 00:09:00.999 "rw_ios_per_sec": 0, 00:09:00.999 "rw_mbytes_per_sec": 0, 00:09:00.999 "r_mbytes_per_sec": 0, 00:09:00.999 "w_mbytes_per_sec": 0 00:09:00.999 }, 00:09:00.999 "claimed": true, 00:09:00.999 "claim_type": "exclusive_write", 00:09:00.999 "zoned": false, 00:09:00.999 "supported_io_types": { 00:09:00.999 "read": true, 00:09:00.999 "write": true, 00:09:00.999 "unmap": true, 00:09:00.999 "flush": true, 00:09:00.999 "reset": true, 00:09:00.999 "nvme_admin": false, 00:09:00.999 "nvme_io": false, 00:09:00.999 "nvme_io_md": false, 00:09:00.999 "write_zeroes": true, 00:09:00.999 "zcopy": true, 00:09:00.999 "get_zone_info": false, 00:09:00.999 "zone_management": false, 00:09:00.999 "zone_append": false, 00:09:00.999 "compare": false, 00:09:00.999 "compare_and_write": false, 00:09:00.999 "abort": true, 00:09:00.999 "seek_hole": false, 00:09:00.999 "seek_data": false, 00:09:00.999 "copy": true, 00:09:00.999 "nvme_iov_md": false 00:09:00.999 }, 00:09:00.999 "memory_domains": [ 00:09:00.999 { 00:09:00.999 "dma_device_id": "system", 00:09:00.999 "dma_device_type": 1 00:09:00.999 }, 00:09:00.999 { 00:09:00.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:00.999 "dma_device_type": 2 00:09:00.999 } 00:09:00.999 ], 00:09:00.999 "driver_specific": {} 00:09:00.999 } 00:09:00.999 ] 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.999 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.999 "name": "Existed_Raid", 00:09:00.999 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:00.999 "strip_size_kb": 64, 00:09:00.999 "state": "configuring", 00:09:00.999 "raid_level": "concat", 00:09:00.999 "superblock": true, 00:09:00.999 "num_base_bdevs": 3, 00:09:00.999 "num_base_bdevs_discovered": 2, 00:09:00.999 "num_base_bdevs_operational": 3, 00:09:00.999 "base_bdevs_list": [ 00:09:00.999 { 00:09:00.999 "name": "BaseBdev1", 00:09:00.999 "uuid": "68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:00.999 "is_configured": true, 00:09:00.999 "data_offset": 2048, 00:09:01.000 "data_size": 63488 00:09:01.000 }, 00:09:01.000 { 00:09:01.000 "name": null, 00:09:01.000 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:09:01.000 "is_configured": false, 00:09:01.000 "data_offset": 0, 00:09:01.000 "data_size": 63488 00:09:01.000 }, 00:09:01.000 { 00:09:01.000 "name": "BaseBdev3", 00:09:01.000 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:09:01.000 "is_configured": true, 00:09:01.000 "data_offset": 2048, 00:09:01.000 "data_size": 63488 00:09:01.000 } 00:09:01.000 ] 00:09:01.000 }' 00:09:01.000 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.000 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.259 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.259 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.259 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.259 23:43:12 bdev_raid.raid_state_function_test_sb 
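In the JSON dumps above, `num_base_bdevs_discovered` tracks how many slots in `base_bdevs_list` have `is_configured: true` (2 of 3 here, since the removed BaseBdev2 slot is a `null` placeholder). A minimal sketch of that relationship, using the slot values reported in the log, assuming the count is simply the number of configured slots as the dumps suggest:

```python
# base_bdevs_list as reported after BaseBdev1 was recreated: BaseBdev1 and
# BaseBdev3 are configured, the removed BaseBdev2 slot is left as null.
base_bdevs_list = [
    {"name": "BaseBdev1", "is_configured": True,  "data_offset": 2048},
    {"name": None,        "is_configured": False, "data_offset": 0},
    {"name": "BaseBdev3", "is_configured": True,  "data_offset": 2048},
]

def count_discovered(base_bdevs):
    # Number of slots holding a configured base bdev.
    return sum(1 for b in base_bdevs if b["is_configured"])

assert count_discovered(base_bdevs_list) == 2  # "num_base_bdevs_discovered": 2
```

With superblock enabled (`-s`), the raid stays in `configuring` until all 3 operational slots are discovered, which is what the subsequent `jq '.[0].base_bdevs_list[...].is_configured'` checks exercise.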
-- common/autotest_common.sh@10 -- # set +x 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.519 [2024-12-06 23:43:12.864960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.519 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.519 "name": "Existed_Raid", 00:09:01.519 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:01.519 "strip_size_kb": 64, 00:09:01.519 "state": "configuring", 00:09:01.519 "raid_level": "concat", 00:09:01.519 "superblock": true, 00:09:01.519 "num_base_bdevs": 3, 00:09:01.519 "num_base_bdevs_discovered": 1, 00:09:01.519 "num_base_bdevs_operational": 3, 00:09:01.519 "base_bdevs_list": [ 00:09:01.519 { 00:09:01.519 "name": "BaseBdev1", 00:09:01.519 "uuid": "68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:01.519 "is_configured": true, 00:09:01.519 "data_offset": 2048, 00:09:01.519 "data_size": 63488 00:09:01.519 }, 00:09:01.519 { 00:09:01.519 "name": null, 00:09:01.519 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:09:01.519 "is_configured": false, 00:09:01.519 "data_offset": 0, 00:09:01.519 "data_size": 63488 00:09:01.519 }, 00:09:01.519 { 00:09:01.519 "name": null, 00:09:01.520 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:09:01.520 "is_configured": false, 00:09:01.520 "data_offset": 0, 00:09:01.520 "data_size": 63488 00:09:01.520 } 00:09:01.520 ] 00:09:01.520 }' 00:09:01.520 23:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.520 23:43:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.779 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.779 [2024-12-06 23:43:13.340257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.039 23:43:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.039 "name": "Existed_Raid", 00:09:02.039 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:02.039 "strip_size_kb": 64, 00:09:02.039 "state": "configuring", 00:09:02.039 "raid_level": "concat", 00:09:02.039 "superblock": true, 00:09:02.039 "num_base_bdevs": 3, 00:09:02.039 "num_base_bdevs_discovered": 2, 00:09:02.039 "num_base_bdevs_operational": 3, 00:09:02.039 "base_bdevs_list": [ 00:09:02.039 { 00:09:02.039 "name": "BaseBdev1", 00:09:02.039 "uuid": "68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:02.039 "is_configured": true, 00:09:02.039 "data_offset": 2048, 00:09:02.039 "data_size": 63488 00:09:02.039 }, 00:09:02.039 { 00:09:02.039 "name": null, 00:09:02.039 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:09:02.039 "is_configured": 
false, 00:09:02.039 "data_offset": 0, 00:09:02.039 "data_size": 63488 00:09:02.039 }, 00:09:02.039 { 00:09:02.039 "name": "BaseBdev3", 00:09:02.039 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:09:02.039 "is_configured": true, 00:09:02.039 "data_offset": 2048, 00:09:02.039 "data_size": 63488 00:09:02.039 } 00:09:02.039 ] 00:09:02.039 }' 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.039 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.299 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.299 [2024-12-06 23:43:13.787516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.558 23:43:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.558 "name": "Existed_Raid", 00:09:02.558 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:02.558 "strip_size_kb": 64, 00:09:02.558 "state": "configuring", 00:09:02.558 "raid_level": "concat", 00:09:02.558 "superblock": true, 00:09:02.558 "num_base_bdevs": 3, 00:09:02.558 
"num_base_bdevs_discovered": 1, 00:09:02.558 "num_base_bdevs_operational": 3, 00:09:02.558 "base_bdevs_list": [ 00:09:02.558 { 00:09:02.558 "name": null, 00:09:02.558 "uuid": "68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:02.558 "is_configured": false, 00:09:02.558 "data_offset": 0, 00:09:02.558 "data_size": 63488 00:09:02.558 }, 00:09:02.558 { 00:09:02.558 "name": null, 00:09:02.558 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:09:02.558 "is_configured": false, 00:09:02.558 "data_offset": 0, 00:09:02.558 "data_size": 63488 00:09:02.558 }, 00:09:02.558 { 00:09:02.558 "name": "BaseBdev3", 00:09:02.558 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:09:02.558 "is_configured": true, 00:09:02.558 "data_offset": 2048, 00:09:02.558 "data_size": 63488 00:09:02.558 } 00:09:02.558 ] 00:09:02.558 }' 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.558 23:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.817 23:43:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.817 [2024-12-06 23:43:14.343792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.817 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.817 
23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.076 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.076 "name": "Existed_Raid", 00:09:03.076 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:03.076 "strip_size_kb": 64, 00:09:03.076 "state": "configuring", 00:09:03.076 "raid_level": "concat", 00:09:03.076 "superblock": true, 00:09:03.076 "num_base_bdevs": 3, 00:09:03.076 "num_base_bdevs_discovered": 2, 00:09:03.076 "num_base_bdevs_operational": 3, 00:09:03.076 "base_bdevs_list": [ 00:09:03.076 { 00:09:03.076 "name": null, 00:09:03.076 "uuid": "68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:03.076 "is_configured": false, 00:09:03.076 "data_offset": 0, 00:09:03.076 "data_size": 63488 00:09:03.076 }, 00:09:03.076 { 00:09:03.076 "name": "BaseBdev2", 00:09:03.076 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:09:03.076 "is_configured": true, 00:09:03.076 "data_offset": 2048, 00:09:03.076 "data_size": 63488 00:09:03.076 }, 00:09:03.076 { 00:09:03.076 "name": "BaseBdev3", 00:09:03.076 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:09:03.076 "is_configured": true, 00:09:03.076 "data_offset": 2048, 00:09:03.076 "data_size": 63488 00:09:03.076 } 00:09:03.076 ] 00:09:03.076 }' 00:09:03.076 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.076 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 68c3177e-3df0-4f7b-bfe0-4fccd49449a1 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.335 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.594 [2024-12-06 23:43:14.917591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:03.594 [2024-12-06 23:43:14.917961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:03.594 [2024-12-06 23:43:14.918003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.594 [2024-12-06 23:43:14.918295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:03.594 [2024-12-06 23:43:14.918483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:03.594 [2024-12-06 23:43:14.918523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:09:03.594 [2024-12-06 23:43:14.918713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.594 NewBaseBdev 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.594 [ 00:09:03.594 { 00:09:03.594 "name": "NewBaseBdev", 00:09:03.594 "aliases": [ 00:09:03.594 "68c3177e-3df0-4f7b-bfe0-4fccd49449a1" 00:09:03.594 ], 00:09:03.594 "product_name": "Malloc disk", 00:09:03.594 "block_size": 512, 
00:09:03.594 "num_blocks": 65536, 00:09:03.594 "uuid": "68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:03.594 "assigned_rate_limits": { 00:09:03.594 "rw_ios_per_sec": 0, 00:09:03.594 "rw_mbytes_per_sec": 0, 00:09:03.594 "r_mbytes_per_sec": 0, 00:09:03.594 "w_mbytes_per_sec": 0 00:09:03.594 }, 00:09:03.594 "claimed": true, 00:09:03.594 "claim_type": "exclusive_write", 00:09:03.594 "zoned": false, 00:09:03.594 "supported_io_types": { 00:09:03.594 "read": true, 00:09:03.594 "write": true, 00:09:03.594 "unmap": true, 00:09:03.594 "flush": true, 00:09:03.594 "reset": true, 00:09:03.594 "nvme_admin": false, 00:09:03.594 "nvme_io": false, 00:09:03.594 "nvme_io_md": false, 00:09:03.594 "write_zeroes": true, 00:09:03.594 "zcopy": true, 00:09:03.594 "get_zone_info": false, 00:09:03.594 "zone_management": false, 00:09:03.594 "zone_append": false, 00:09:03.594 "compare": false, 00:09:03.594 "compare_and_write": false, 00:09:03.594 "abort": true, 00:09:03.594 "seek_hole": false, 00:09:03.594 "seek_data": false, 00:09:03.594 "copy": true, 00:09:03.594 "nvme_iov_md": false 00:09:03.594 }, 00:09:03.594 "memory_domains": [ 00:09:03.594 { 00:09:03.594 "dma_device_id": "system", 00:09:03.594 "dma_device_type": 1 00:09:03.594 }, 00:09:03.594 { 00:09:03.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.594 "dma_device_type": 2 00:09:03.594 } 00:09:03.594 ], 00:09:03.594 "driver_specific": {} 00:09:03.594 } 00:09:03.594 ] 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.594 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.595 "name": "Existed_Raid", 00:09:03.595 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:03.595 "strip_size_kb": 64, 00:09:03.595 "state": "online", 00:09:03.595 "raid_level": "concat", 00:09:03.595 "superblock": true, 00:09:03.595 "num_base_bdevs": 3, 00:09:03.595 "num_base_bdevs_discovered": 3, 00:09:03.595 "num_base_bdevs_operational": 3, 00:09:03.595 "base_bdevs_list": [ 00:09:03.595 { 00:09:03.595 "name": "NewBaseBdev", 00:09:03.595 "uuid": 
"68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:03.595 "is_configured": true, 00:09:03.595 "data_offset": 2048, 00:09:03.595 "data_size": 63488 00:09:03.595 }, 00:09:03.595 { 00:09:03.595 "name": "BaseBdev2", 00:09:03.595 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:09:03.595 "is_configured": true, 00:09:03.595 "data_offset": 2048, 00:09:03.595 "data_size": 63488 00:09:03.595 }, 00:09:03.595 { 00:09:03.595 "name": "BaseBdev3", 00:09:03.595 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:09:03.595 "is_configured": true, 00:09:03.595 "data_offset": 2048, 00:09:03.595 "data_size": 63488 00:09:03.595 } 00:09:03.595 ] 00:09:03.595 }' 00:09:03.595 23:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.595 23:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:09:03.853 [2024-12-06 23:43:15.353125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.853 "name": "Existed_Raid", 00:09:03.853 "aliases": [ 00:09:03.853 "46b75685-ef5b-4243-80cb-2deff550c3f7" 00:09:03.853 ], 00:09:03.853 "product_name": "Raid Volume", 00:09:03.853 "block_size": 512, 00:09:03.853 "num_blocks": 190464, 00:09:03.853 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:03.853 "assigned_rate_limits": { 00:09:03.853 "rw_ios_per_sec": 0, 00:09:03.853 "rw_mbytes_per_sec": 0, 00:09:03.853 "r_mbytes_per_sec": 0, 00:09:03.853 "w_mbytes_per_sec": 0 00:09:03.853 }, 00:09:03.853 "claimed": false, 00:09:03.853 "zoned": false, 00:09:03.853 "supported_io_types": { 00:09:03.853 "read": true, 00:09:03.853 "write": true, 00:09:03.853 "unmap": true, 00:09:03.853 "flush": true, 00:09:03.853 "reset": true, 00:09:03.853 "nvme_admin": false, 00:09:03.853 "nvme_io": false, 00:09:03.853 "nvme_io_md": false, 00:09:03.853 "write_zeroes": true, 00:09:03.853 "zcopy": false, 00:09:03.853 "get_zone_info": false, 00:09:03.853 "zone_management": false, 00:09:03.853 "zone_append": false, 00:09:03.853 "compare": false, 00:09:03.853 "compare_and_write": false, 00:09:03.853 "abort": false, 00:09:03.853 "seek_hole": false, 00:09:03.853 "seek_data": false, 00:09:03.853 "copy": false, 00:09:03.853 "nvme_iov_md": false 00:09:03.853 }, 00:09:03.853 "memory_domains": [ 00:09:03.853 { 00:09:03.853 "dma_device_id": "system", 00:09:03.853 "dma_device_type": 1 00:09:03.853 }, 00:09:03.853 { 00:09:03.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.853 "dma_device_type": 2 00:09:03.853 }, 00:09:03.853 { 00:09:03.853 "dma_device_id": "system", 00:09:03.853 "dma_device_type": 1 00:09:03.853 }, 00:09:03.853 { 00:09:03.853 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.853 "dma_device_type": 2 00:09:03.853 }, 00:09:03.853 { 00:09:03.853 "dma_device_id": "system", 00:09:03.853 "dma_device_type": 1 00:09:03.853 }, 00:09:03.853 { 00:09:03.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.853 "dma_device_type": 2 00:09:03.853 } 00:09:03.853 ], 00:09:03.853 "driver_specific": { 00:09:03.853 "raid": { 00:09:03.853 "uuid": "46b75685-ef5b-4243-80cb-2deff550c3f7", 00:09:03.853 "strip_size_kb": 64, 00:09:03.853 "state": "online", 00:09:03.853 "raid_level": "concat", 00:09:03.853 "superblock": true, 00:09:03.853 "num_base_bdevs": 3, 00:09:03.853 "num_base_bdevs_discovered": 3, 00:09:03.853 "num_base_bdevs_operational": 3, 00:09:03.853 "base_bdevs_list": [ 00:09:03.853 { 00:09:03.853 "name": "NewBaseBdev", 00:09:03.853 "uuid": "68c3177e-3df0-4f7b-bfe0-4fccd49449a1", 00:09:03.853 "is_configured": true, 00:09:03.853 "data_offset": 2048, 00:09:03.853 "data_size": 63488 00:09:03.853 }, 00:09:03.853 { 00:09:03.853 "name": "BaseBdev2", 00:09:03.853 "uuid": "8b8af42e-4b61-40ae-a156-6e523758ea34", 00:09:03.853 "is_configured": true, 00:09:03.853 "data_offset": 2048, 00:09:03.853 "data_size": 63488 00:09:03.853 }, 00:09:03.853 { 00:09:03.853 "name": "BaseBdev3", 00:09:03.853 "uuid": "c8e5e9f8-eda9-46d8-829c-df1396c8dbcf", 00:09:03.853 "is_configured": true, 00:09:03.853 "data_offset": 2048, 00:09:03.853 "data_size": 63488 00:09:03.853 } 00:09:03.853 ] 00:09:03.853 } 00:09:03.853 } 00:09:03.853 }' 00:09:03.853 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:04.111 BaseBdev2 00:09:04.111 BaseBdev3' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.111 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.112 [2024-12-06 23:43:15.620445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.112 [2024-12-06 23:43:15.620490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.112 [2024-12-06 23:43:15.620578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.112 [2024-12-06 23:43:15.620641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.112 [2024-12-06 23:43:15.620655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66142 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66142 ']' 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66142 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66142 00:09:04.112 killing process with pid 66142 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66142' 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66142 00:09:04.112 [2024-12-06 23:43:15.670505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.112 23:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66142 00:09:04.679 [2024-12-06 23:43:15.989175] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.055 ************************************ 00:09:06.055 END TEST raid_state_function_test_sb 00:09:06.055 ************************************ 00:09:06.055 23:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:06.055 00:09:06.055 real 0m10.575s 
00:09:06.055 user 0m16.549s 00:09:06.055 sys 0m1.903s 00:09:06.056 23:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.056 23:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.056 23:43:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:06.056 23:43:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:06.056 23:43:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.056 23:43:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.056 ************************************ 00:09:06.056 START TEST raid_superblock_test 00:09:06.056 ************************************ 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:06.056 23:43:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66762 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66762 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66762 ']' 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.056 23:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.056 [2024-12-06 23:43:17.368532] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:09:06.056 [2024-12-06 23:43:17.368768] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66762 ] 00:09:06.056 [2024-12-06 23:43:17.548957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.315 [2024-12-06 23:43:17.684656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.575 [2024-12-06 23:43:17.925082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.575 [2024-12-06 23:43:17.925153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:06.835 
23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.835 malloc1 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.835 [2024-12-06 23:43:18.246368] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:06.835 [2024-12-06 23:43:18.246513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.835 [2024-12-06 23:43:18.246553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:06.835 [2024-12-06 23:43:18.246582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.835 [2024-12-06 23:43:18.249048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.835 [2024-12-06 23:43:18.249120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:06.835 pt1 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.835 malloc2 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.835 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.835 [2024-12-06 23:43:18.311220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.835 [2024-12-06 23:43:18.311280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.835 [2024-12-06 23:43:18.311308] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:06.835 [2024-12-06 23:43:18.311317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.835 [2024-12-06 23:43:18.313619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.835 [2024-12-06 23:43:18.313653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.835 
pt2 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.836 malloc3 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.836 [2024-12-06 23:43:18.383397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:06.836 [2024-12-06 23:43:18.383526] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.836 [2024-12-06 23:43:18.383567] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:06.836 [2024-12-06 23:43:18.383601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.836 [2024-12-06 23:43:18.385960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.836 [2024-12-06 23:43:18.386028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:06.836 pt3 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.836 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.836 [2024-12-06 23:43:18.395440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.098 [2024-12-06 23:43:18.397541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.098 [2024-12-06 23:43:18.397657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:07.098 [2024-12-06 23:43:18.397854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:07.098 [2024-12-06 23:43:18.397869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.098 [2024-12-06 23:43:18.398116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:07.098 [2024-12-06 23:43:18.398284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:07.098 [2024-12-06 23:43:18.398292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:07.098 [2024-12-06 23:43:18.398445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.098 23:43:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.098 "name": "raid_bdev1", 00:09:07.098 "uuid": "5253082c-7d97-428b-a5ea-0ed198e8254e", 00:09:07.098 "strip_size_kb": 64, 00:09:07.098 "state": "online", 00:09:07.098 "raid_level": "concat", 00:09:07.098 "superblock": true, 00:09:07.098 "num_base_bdevs": 3, 00:09:07.098 "num_base_bdevs_discovered": 3, 00:09:07.098 "num_base_bdevs_operational": 3, 00:09:07.098 "base_bdevs_list": [ 00:09:07.098 { 00:09:07.098 "name": "pt1", 00:09:07.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.098 "is_configured": true, 00:09:07.098 "data_offset": 2048, 00:09:07.098 "data_size": 63488 00:09:07.098 }, 00:09:07.098 { 00:09:07.098 "name": "pt2", 00:09:07.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.098 "is_configured": true, 00:09:07.098 "data_offset": 2048, 00:09:07.098 "data_size": 63488 00:09:07.098 }, 00:09:07.098 { 00:09:07.098 "name": "pt3", 00:09:07.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.098 "is_configured": true, 00:09:07.098 "data_offset": 2048, 00:09:07.098 "data_size": 63488 00:09:07.098 } 00:09:07.098 ] 00:09:07.098 }' 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.098 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.364 [2024-12-06 23:43:18.831161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.364 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.364 "name": "raid_bdev1", 00:09:07.364 "aliases": [ 00:09:07.364 "5253082c-7d97-428b-a5ea-0ed198e8254e" 00:09:07.364 ], 00:09:07.364 "product_name": "Raid Volume", 00:09:07.364 "block_size": 512, 00:09:07.364 "num_blocks": 190464, 00:09:07.364 "uuid": "5253082c-7d97-428b-a5ea-0ed198e8254e", 00:09:07.364 "assigned_rate_limits": { 00:09:07.364 "rw_ios_per_sec": 0, 00:09:07.364 "rw_mbytes_per_sec": 0, 00:09:07.364 "r_mbytes_per_sec": 0, 00:09:07.364 "w_mbytes_per_sec": 0 00:09:07.364 }, 00:09:07.364 "claimed": false, 00:09:07.364 "zoned": false, 00:09:07.364 "supported_io_types": { 00:09:07.364 "read": true, 00:09:07.364 "write": true, 00:09:07.364 "unmap": true, 00:09:07.364 "flush": true, 00:09:07.364 "reset": true, 00:09:07.364 "nvme_admin": false, 00:09:07.364 "nvme_io": false, 00:09:07.364 "nvme_io_md": false, 00:09:07.364 "write_zeroes": true, 00:09:07.364 "zcopy": false, 00:09:07.364 "get_zone_info": false, 00:09:07.364 "zone_management": false, 00:09:07.364 "zone_append": false, 00:09:07.364 "compare": 
false, 00:09:07.364 "compare_and_write": false, 00:09:07.364 "abort": false, 00:09:07.364 "seek_hole": false, 00:09:07.364 "seek_data": false, 00:09:07.364 "copy": false, 00:09:07.364 "nvme_iov_md": false 00:09:07.364 }, 00:09:07.364 "memory_domains": [ 00:09:07.364 { 00:09:07.364 "dma_device_id": "system", 00:09:07.364 "dma_device_type": 1 00:09:07.364 }, 00:09:07.364 { 00:09:07.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.364 "dma_device_type": 2 00:09:07.364 }, 00:09:07.364 { 00:09:07.364 "dma_device_id": "system", 00:09:07.364 "dma_device_type": 1 00:09:07.364 }, 00:09:07.364 { 00:09:07.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.364 "dma_device_type": 2 00:09:07.364 }, 00:09:07.364 { 00:09:07.364 "dma_device_id": "system", 00:09:07.364 "dma_device_type": 1 00:09:07.364 }, 00:09:07.364 { 00:09:07.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.364 "dma_device_type": 2 00:09:07.364 } 00:09:07.364 ], 00:09:07.364 "driver_specific": { 00:09:07.364 "raid": { 00:09:07.364 "uuid": "5253082c-7d97-428b-a5ea-0ed198e8254e", 00:09:07.364 "strip_size_kb": 64, 00:09:07.364 "state": "online", 00:09:07.364 "raid_level": "concat", 00:09:07.364 "superblock": true, 00:09:07.364 "num_base_bdevs": 3, 00:09:07.364 "num_base_bdevs_discovered": 3, 00:09:07.364 "num_base_bdevs_operational": 3, 00:09:07.364 "base_bdevs_list": [ 00:09:07.364 { 00:09:07.364 "name": "pt1", 00:09:07.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.364 "is_configured": true, 00:09:07.364 "data_offset": 2048, 00:09:07.365 "data_size": 63488 00:09:07.365 }, 00:09:07.365 { 00:09:07.365 "name": "pt2", 00:09:07.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.365 "is_configured": true, 00:09:07.365 "data_offset": 2048, 00:09:07.365 "data_size": 63488 00:09:07.365 }, 00:09:07.365 { 00:09:07.365 "name": "pt3", 00:09:07.365 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.365 "is_configured": true, 00:09:07.365 "data_offset": 2048, 00:09:07.365 
"data_size": 63488 00:09:07.365 } 00:09:07.365 ] 00:09:07.365 } 00:09:07.365 } 00:09:07.365 }' 00:09:07.365 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.365 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:07.365 pt2 00:09:07.365 pt3' 00:09:07.365 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.625 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.625 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.625 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.625 23:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:07.625 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.625 23:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:07.625 23:43:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.625 [2024-12-06 23:43:19.134535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.625 23:43:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5253082c-7d97-428b-a5ea-0ed198e8254e 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5253082c-7d97-428b-a5ea-0ed198e8254e ']' 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.625 [2024-12-06 23:43:19.178209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.625 [2024-12-06 23:43:19.178290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.625 [2024-12-06 23:43:19.178386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.625 [2024-12-06 23:43:19.178460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.625 [2024-12-06 23:43:19.178469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:07.625 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.885 23:43:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:07.885 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 [2024-12-06 23:43:19.337958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:07.886 [2024-12-06 23:43:19.340160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:09:07.886 [2024-12-06 23:43:19.340204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:07.886 [2024-12-06 23:43:19.340258] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:07.886 [2024-12-06 23:43:19.340307] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:07.886 [2024-12-06 23:43:19.340324] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:07.886 [2024-12-06 23:43:19.340341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.886 [2024-12-06 23:43:19.340349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:07.886 request: 00:09:07.886 { 00:09:07.886 "name": "raid_bdev1", 00:09:07.886 "raid_level": "concat", 00:09:07.886 "base_bdevs": [ 00:09:07.886 "malloc1", 00:09:07.886 "malloc2", 00:09:07.886 "malloc3" 00:09:07.886 ], 00:09:07.886 "strip_size_kb": 64, 00:09:07.886 "superblock": false, 00:09:07.886 "method": "bdev_raid_create", 00:09:07.886 "req_id": 1 00:09:07.886 } 00:09:07.886 Got JSON-RPC error response 00:09:07.886 response: 00:09:07.886 { 00:09:07.886 "code": -17, 00:09:07.886 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:07.886 } 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 [2024-12-06 23:43:19.405789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.886 [2024-12-06 23:43:19.405879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.886 [2024-12-06 23:43:19.405912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:07.886 [2024-12-06 23:43:19.405938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.886 [2024-12-06 23:43:19.408488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.886 [2024-12-06 23:43:19.408557] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.886 [2024-12-06 23:43:19.408654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:07.886 [2024-12-06 23:43:19.408737] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.886 pt1 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.146 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.146 "name": "raid_bdev1", 
00:09:08.146 "uuid": "5253082c-7d97-428b-a5ea-0ed198e8254e", 00:09:08.146 "strip_size_kb": 64, 00:09:08.146 "state": "configuring", 00:09:08.146 "raid_level": "concat", 00:09:08.146 "superblock": true, 00:09:08.146 "num_base_bdevs": 3, 00:09:08.146 "num_base_bdevs_discovered": 1, 00:09:08.146 "num_base_bdevs_operational": 3, 00:09:08.146 "base_bdevs_list": [ 00:09:08.146 { 00:09:08.146 "name": "pt1", 00:09:08.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.146 "is_configured": true, 00:09:08.146 "data_offset": 2048, 00:09:08.146 "data_size": 63488 00:09:08.146 }, 00:09:08.146 { 00:09:08.146 "name": null, 00:09:08.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.146 "is_configured": false, 00:09:08.146 "data_offset": 2048, 00:09:08.146 "data_size": 63488 00:09:08.146 }, 00:09:08.146 { 00:09:08.146 "name": null, 00:09:08.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.146 "is_configured": false, 00:09:08.146 "data_offset": 2048, 00:09:08.146 "data_size": 63488 00:09:08.146 } 00:09:08.146 ] 00:09:08.146 }' 00:09:08.146 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.146 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.405 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:08.405 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.406 [2024-12-06 23:43:19.789315] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.406 [2024-12-06 23:43:19.789448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.406 [2024-12-06 23:43:19.789495] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:08.406 [2024-12-06 23:43:19.789525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.406 [2024-12-06 23:43:19.790008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.406 [2024-12-06 23:43:19.790065] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.406 [2024-12-06 23:43:19.790175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:08.406 [2024-12-06 23:43:19.790232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.406 pt2 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.406 [2024-12-06 23:43:19.801307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.406 "name": "raid_bdev1", 00:09:08.406 "uuid": "5253082c-7d97-428b-a5ea-0ed198e8254e", 00:09:08.406 "strip_size_kb": 64, 00:09:08.406 "state": "configuring", 00:09:08.406 "raid_level": "concat", 00:09:08.406 "superblock": true, 00:09:08.406 "num_base_bdevs": 3, 00:09:08.406 "num_base_bdevs_discovered": 1, 00:09:08.406 "num_base_bdevs_operational": 3, 00:09:08.406 "base_bdevs_list": [ 00:09:08.406 { 00:09:08.406 "name": "pt1", 00:09:08.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.406 "is_configured": true, 00:09:08.406 "data_offset": 2048, 00:09:08.406 "data_size": 63488 00:09:08.406 }, 00:09:08.406 { 00:09:08.406 "name": null, 00:09:08.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.406 "is_configured": false, 00:09:08.406 "data_offset": 0, 00:09:08.406 "data_size": 63488 00:09:08.406 }, 00:09:08.406 { 00:09:08.406 "name": null, 00:09:08.406 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.406 "is_configured": false, 00:09:08.406 "data_offset": 2048, 00:09:08.406 "data_size": 63488 00:09:08.406 } 00:09:08.406 ] 00:09:08.406 }' 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.406 23:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.972 [2024-12-06 23:43:20.252784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.972 [2024-12-06 23:43:20.252954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.972 [2024-12-06 23:43:20.252979] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:08.972 [2024-12-06 23:43:20.252991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.972 [2024-12-06 23:43:20.253548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.972 [2024-12-06 23:43:20.253570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.972 [2024-12-06 23:43:20.253666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:08.972 [2024-12-06 23:43:20.253708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.972 pt2 00:09:08.972 23:43:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.972 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.972 [2024-12-06 23:43:20.264715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:08.972 [2024-12-06 23:43:20.264765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.973 [2024-12-06 23:43:20.264781] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:08.973 [2024-12-06 23:43:20.264792] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.973 [2024-12-06 23:43:20.265175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.973 [2024-12-06 23:43:20.265213] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:08.973 [2024-12-06 23:43:20.265273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:08.973 [2024-12-06 23:43:20.265293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:08.973 [2024-12-06 23:43:20.265418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:08.973 [2024-12-06 23:43:20.265430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.973 [2024-12-06 23:43:20.265710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:08.973 [2024-12-06 23:43:20.265871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:08.973 [2024-12-06 23:43:20.265880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:08.973 [2024-12-06 23:43:20.266025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.973 pt3 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.973 23:43:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.973 "name": "raid_bdev1", 00:09:08.973 "uuid": "5253082c-7d97-428b-a5ea-0ed198e8254e", 00:09:08.973 "strip_size_kb": 64, 00:09:08.973 "state": "online", 00:09:08.973 "raid_level": "concat", 00:09:08.973 "superblock": true, 00:09:08.973 "num_base_bdevs": 3, 00:09:08.973 "num_base_bdevs_discovered": 3, 00:09:08.973 "num_base_bdevs_operational": 3, 00:09:08.973 "base_bdevs_list": [ 00:09:08.973 { 00:09:08.973 "name": "pt1", 00:09:08.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.973 "is_configured": true, 00:09:08.973 "data_offset": 2048, 00:09:08.973 "data_size": 63488 00:09:08.973 }, 00:09:08.973 { 00:09:08.973 "name": "pt2", 00:09:08.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.973 "is_configured": true, 00:09:08.973 "data_offset": 2048, 00:09:08.973 "data_size": 63488 00:09:08.973 }, 00:09:08.973 { 00:09:08.973 "name": "pt3", 00:09:08.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.973 "is_configured": true, 00:09:08.973 "data_offset": 2048, 00:09:08.973 "data_size": 63488 00:09:08.973 } 00:09:08.973 ] 00:09:08.973 }' 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.973 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.233 [2024-12-06 23:43:20.672344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.233 "name": "raid_bdev1", 00:09:09.233 "aliases": [ 00:09:09.233 "5253082c-7d97-428b-a5ea-0ed198e8254e" 00:09:09.233 ], 00:09:09.233 "product_name": "Raid Volume", 00:09:09.233 "block_size": 512, 00:09:09.233 "num_blocks": 190464, 00:09:09.233 "uuid": "5253082c-7d97-428b-a5ea-0ed198e8254e", 00:09:09.233 "assigned_rate_limits": { 00:09:09.233 "rw_ios_per_sec": 0, 00:09:09.233 "rw_mbytes_per_sec": 0, 00:09:09.233 "r_mbytes_per_sec": 0, 00:09:09.233 "w_mbytes_per_sec": 0 00:09:09.233 }, 00:09:09.233 "claimed": false, 00:09:09.233 "zoned": false, 00:09:09.233 "supported_io_types": { 00:09:09.233 "read": true, 00:09:09.233 "write": true, 00:09:09.233 "unmap": true, 00:09:09.233 "flush": true, 00:09:09.233 "reset": true, 00:09:09.233 "nvme_admin": false, 00:09:09.233 "nvme_io": false, 
00:09:09.233 "nvme_io_md": false, 00:09:09.233 "write_zeroes": true, 00:09:09.233 "zcopy": false, 00:09:09.233 "get_zone_info": false, 00:09:09.233 "zone_management": false, 00:09:09.233 "zone_append": false, 00:09:09.233 "compare": false, 00:09:09.233 "compare_and_write": false, 00:09:09.233 "abort": false, 00:09:09.233 "seek_hole": false, 00:09:09.233 "seek_data": false, 00:09:09.233 "copy": false, 00:09:09.233 "nvme_iov_md": false 00:09:09.233 }, 00:09:09.233 "memory_domains": [ 00:09:09.233 { 00:09:09.233 "dma_device_id": "system", 00:09:09.233 "dma_device_type": 1 00:09:09.233 }, 00:09:09.233 { 00:09:09.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.233 "dma_device_type": 2 00:09:09.233 }, 00:09:09.233 { 00:09:09.233 "dma_device_id": "system", 00:09:09.233 "dma_device_type": 1 00:09:09.233 }, 00:09:09.233 { 00:09:09.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.233 "dma_device_type": 2 00:09:09.233 }, 00:09:09.233 { 00:09:09.233 "dma_device_id": "system", 00:09:09.233 "dma_device_type": 1 00:09:09.233 }, 00:09:09.233 { 00:09:09.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.233 "dma_device_type": 2 00:09:09.233 } 00:09:09.233 ], 00:09:09.233 "driver_specific": { 00:09:09.233 "raid": { 00:09:09.233 "uuid": "5253082c-7d97-428b-a5ea-0ed198e8254e", 00:09:09.233 "strip_size_kb": 64, 00:09:09.233 "state": "online", 00:09:09.233 "raid_level": "concat", 00:09:09.233 "superblock": true, 00:09:09.233 "num_base_bdevs": 3, 00:09:09.233 "num_base_bdevs_discovered": 3, 00:09:09.233 "num_base_bdevs_operational": 3, 00:09:09.233 "base_bdevs_list": [ 00:09:09.233 { 00:09:09.233 "name": "pt1", 00:09:09.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.233 "is_configured": true, 00:09:09.233 "data_offset": 2048, 00:09:09.233 "data_size": 63488 00:09:09.233 }, 00:09:09.233 { 00:09:09.233 "name": "pt2", 00:09:09.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.233 "is_configured": true, 00:09:09.233 "data_offset": 2048, 00:09:09.233 
"data_size": 63488 00:09:09.233 }, 00:09:09.233 { 00:09:09.233 "name": "pt3", 00:09:09.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:09.233 "is_configured": true, 00:09:09.233 "data_offset": 2048, 00:09:09.233 "data_size": 63488 00:09:09.233 } 00:09:09.233 ] 00:09:09.233 } 00:09:09.233 } 00:09:09.233 }' 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.233 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:09.234 pt2 00:09:09.234 pt3' 00:09:09.234 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.494 [2024-12-06 23:43:20.955759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5253082c-7d97-428b-a5ea-0ed198e8254e '!=' 5253082c-7d97-428b-a5ea-0ed198e8254e ']' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66762 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66762 ']' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66762 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.494 23:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66762 00:09:09.494 killing process with pid 66762 00:09:09.494 23:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.494 23:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.494 23:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66762' 00:09:09.494 23:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66762 00:09:09.494 [2024-12-06 23:43:21.023189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:09.494 [2024-12-06 23:43:21.023280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.494 [2024-12-06 23:43:21.023347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.494 23:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66762 00:09:09.494 [2024-12-06 23:43:21.023368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:10.084 [2024-12-06 23:43:21.357493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.025 23:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:11.025 00:09:11.025 real 0m5.297s 00:09:11.025 user 0m7.390s 00:09:11.025 sys 0m0.989s 00:09:11.025 23:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.025 ************************************ 00:09:11.025 END TEST raid_superblock_test 00:09:11.025 ************************************ 00:09:11.025 23:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.285 23:43:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:11.285 23:43:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.285 23:43:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.285 23:43:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.285 ************************************ 00:09:11.285 START TEST raid_read_error_test 00:09:11.285 ************************************ 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:11.285 23:43:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6bHkXONeVp 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67021 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67021 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67021 ']' 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.285 23:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.285 [2024-12-06 23:43:22.750993] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:09:11.285 [2024-12-06 23:43:22.751215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67021 ] 00:09:11.546 [2024-12-06 23:43:22.926036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.546 [2024-12-06 23:43:23.063029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.806 [2024-12-06 23:43:23.295983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.806 [2024-12-06 23:43:23.296119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.067 BaseBdev1_malloc 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.067 true 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.067 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.328 [2024-12-06 23:43:23.630098] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:12.328 [2024-12-06 23:43:23.630176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.328 [2024-12-06 23:43:23.630200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:12.328 [2024-12-06 23:43:23.630212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.328 [2024-12-06 23:43:23.632861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.328 [2024-12-06 23:43:23.632906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:12.328 BaseBdev1 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.328 BaseBdev2_malloc 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.328 true 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.328 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.329 [2024-12-06 23:43:23.702231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:12.329 [2024-12-06 23:43:23.702312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.329 [2024-12-06 23:43:23.702336] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:12.329 [2024-12-06 23:43:23.702351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.329 [2024-12-06 23:43:23.705023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.329 [2024-12-06 23:43:23.705064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:12.329 BaseBdev2 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.329 BaseBdev3_malloc 00:09:12.329 23:43:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.329 true 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.329 [2024-12-06 23:43:23.789063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:12.329 [2024-12-06 23:43:23.789130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.329 [2024-12-06 23:43:23.789150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:12.329 [2024-12-06 23:43:23.789162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.329 [2024-12-06 23:43:23.791559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.329 [2024-12-06 23:43:23.791600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:12.329 BaseBdev3 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.329 [2024-12-06 23:43:23.801130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.329 [2024-12-06 23:43:23.803177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.329 [2024-12-06 23:43:23.803349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.329 [2024-12-06 23:43:23.803581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:12.329 [2024-12-06 23:43:23.803595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.329 [2024-12-06 23:43:23.803875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:12.329 [2024-12-06 23:43:23.804051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:12.329 [2024-12-06 23:43:23.804067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:12.329 [2024-12-06 23:43:23.804219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.329 23:43:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.329 "name": "raid_bdev1", 00:09:12.329 "uuid": "76d7c876-144c-4af9-8803-fcc003d749ff", 00:09:12.329 "strip_size_kb": 64, 00:09:12.329 "state": "online", 00:09:12.329 "raid_level": "concat", 00:09:12.329 "superblock": true, 00:09:12.329 "num_base_bdevs": 3, 00:09:12.329 "num_base_bdevs_discovered": 3, 00:09:12.329 "num_base_bdevs_operational": 3, 00:09:12.329 "base_bdevs_list": [ 00:09:12.329 { 00:09:12.329 "name": "BaseBdev1", 00:09:12.329 "uuid": "880d675d-bff9-5408-85d7-2d2b63274c28", 00:09:12.329 "is_configured": true, 00:09:12.329 "data_offset": 2048, 00:09:12.329 "data_size": 63488 00:09:12.329 }, 00:09:12.329 { 00:09:12.329 "name": "BaseBdev2", 00:09:12.329 "uuid": "d1a58244-1637-5498-a9de-483b60968b2f", 00:09:12.329 "is_configured": true, 00:09:12.329 "data_offset": 2048, 00:09:12.329 "data_size": 63488 
00:09:12.329 }, 00:09:12.329 { 00:09:12.329 "name": "BaseBdev3", 00:09:12.329 "uuid": "99a9fb4d-f282-50d1-a5a7-b85a478aa2f3", 00:09:12.329 "is_configured": true, 00:09:12.329 "data_offset": 2048, 00:09:12.329 "data_size": 63488 00:09:12.329 } 00:09:12.329 ] 00:09:12.329 }' 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.329 23:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.900 23:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:12.900 23:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:12.900 [2024-12-06 23:43:24.313607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.840 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.840 "name": "raid_bdev1", 00:09:13.840 "uuid": "76d7c876-144c-4af9-8803-fcc003d749ff", 00:09:13.840 "strip_size_kb": 64, 00:09:13.840 "state": "online", 00:09:13.840 "raid_level": "concat", 00:09:13.840 "superblock": true, 00:09:13.840 "num_base_bdevs": 3, 00:09:13.841 "num_base_bdevs_discovered": 3, 00:09:13.841 "num_base_bdevs_operational": 3, 00:09:13.841 "base_bdevs_list": [ 00:09:13.841 { 00:09:13.841 "name": "BaseBdev1", 00:09:13.841 "uuid": "880d675d-bff9-5408-85d7-2d2b63274c28", 00:09:13.841 "is_configured": true, 00:09:13.841 "data_offset": 2048, 00:09:13.841 "data_size": 63488 
00:09:13.841 }, 00:09:13.841 { 00:09:13.841 "name": "BaseBdev2", 00:09:13.841 "uuid": "d1a58244-1637-5498-a9de-483b60968b2f", 00:09:13.841 "is_configured": true, 00:09:13.841 "data_offset": 2048, 00:09:13.841 "data_size": 63488 00:09:13.841 }, 00:09:13.841 { 00:09:13.841 "name": "BaseBdev3", 00:09:13.841 "uuid": "99a9fb4d-f282-50d1-a5a7-b85a478aa2f3", 00:09:13.841 "is_configured": true, 00:09:13.841 "data_offset": 2048, 00:09:13.841 "data_size": 63488 00:09:13.841 } 00:09:13.841 ] 00:09:13.841 }' 00:09:13.841 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.841 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.411 [2024-12-06 23:43:25.694865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:14.411 [2024-12-06 23:43:25.694998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.411 [2024-12-06 23:43:25.697759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.411 [2024-12-06 23:43:25.697809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.411 [2024-12-06 23:43:25.697852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.411 [2024-12-06 23:43:25.697861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:14.411 { 00:09:14.411 "results": [ 00:09:14.411 { 00:09:14.411 "job": "raid_bdev1", 00:09:14.411 "core_mask": "0x1", 00:09:14.411 "workload": "randrw", 00:09:14.411 "percentage": 50, 
00:09:14.411 "status": "finished", 00:09:14.411 "queue_depth": 1, 00:09:14.411 "io_size": 131072, 00:09:14.411 "runtime": 1.381894, 00:09:14.411 "iops": 13539.388694067708, 00:09:14.411 "mibps": 1692.4235867584634, 00:09:14.411 "io_failed": 1, 00:09:14.411 "io_timeout": 0, 00:09:14.411 "avg_latency_us": 103.79952180010405, 00:09:14.411 "min_latency_us": 25.4882096069869, 00:09:14.411 "max_latency_us": 1387.989519650655 00:09:14.411 } 00:09:14.411 ], 00:09:14.411 "core_count": 1 00:09:14.411 } 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67021 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67021 ']' 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67021 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67021 00:09:14.411 killing process with pid 67021 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67021' 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67021 00:09:14.411 [2024-12-06 23:43:25.742152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.411 23:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67021 00:09:14.670 [2024-12-06 
23:43:25.998230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6bHkXONeVp 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:16.052 00:09:16.052 real 0m4.669s 00:09:16.052 user 0m5.386s 00:09:16.052 sys 0m0.648s 00:09:16.052 ************************************ 00:09:16.052 END TEST raid_read_error_test 00:09:16.052 ************************************ 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.052 23:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.052 23:43:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:16.052 23:43:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:16.052 23:43:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.052 23:43:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.052 ************************************ 00:09:16.052 START TEST raid_write_error_test 00:09:16.052 ************************************ 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:16.052 23:43:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:16.052 23:43:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bDYL1Cd4Xb 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67161 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67161 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67161 ']' 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.052 23:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.052 [2024-12-06 23:43:27.487789] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:09:16.052 [2024-12-06 23:43:27.488316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67161 ] 00:09:16.313 [2024-12-06 23:43:27.660103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.313 [2024-12-06 23:43:27.798137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.581 [2024-12-06 23:43:28.029290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.581 [2024-12-06 23:43:28.029478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.868 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.868 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:16.868 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.868 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:16.868 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.868 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.868 BaseBdev1_malloc 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.869 true 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.869 [2024-12-06 23:43:28.378379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:16.869 [2024-12-06 23:43:28.378533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.869 [2024-12-06 23:43:28.378560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:16.869 [2024-12-06 23:43:28.378573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.869 [2024-12-06 23:43:28.380995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.869 [2024-12-06 23:43:28.381034] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:16.869 BaseBdev1 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.869 23:43:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.150 BaseBdev2_malloc 00:09:17.150 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.150 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.150 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.150 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.150 true 00:09:17.150 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.150 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.150 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.151 [2024-12-06 23:43:28.452917] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.151 [2024-12-06 23:43:28.452979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.151 [2024-12-06 23:43:28.452996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:17.151 [2024-12-06 23:43:28.453008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.151 [2024-12-06 23:43:28.455396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.151 [2024-12-06 23:43:28.455523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.151 BaseBdev2 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.151 23:43:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.151 BaseBdev3_malloc 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.151 true 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.151 [2024-12-06 23:43:28.551532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:17.151 [2024-12-06 23:43:28.551597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.151 [2024-12-06 23:43:28.551616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:17.151 [2024-12-06 23:43:28.551628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.151 [2024-12-06 23:43:28.553997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.151 [2024-12-06 23:43:28.554117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:17.151 BaseBdev3 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.151 [2024-12-06 23:43:28.563608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.151 [2024-12-06 23:43:28.565770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.151 [2024-12-06 23:43:28.565845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.151 [2024-12-06 23:43:28.566059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:17.151 [2024-12-06 23:43:28.566078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.151 [2024-12-06 23:43:28.566333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:17.151 [2024-12-06 23:43:28.566497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:17.151 [2024-12-06 23:43:28.566511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:17.151 [2024-12-06 23:43:28.566654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.151 "name": "raid_bdev1", 00:09:17.151 "uuid": "31cd6c65-1f3a-4bc0-8ebe-1981d1d8a82f", 00:09:17.151 "strip_size_kb": 64, 00:09:17.151 "state": "online", 00:09:17.151 "raid_level": "concat", 00:09:17.151 "superblock": true, 00:09:17.151 "num_base_bdevs": 3, 00:09:17.151 "num_base_bdevs_discovered": 3, 00:09:17.151 "num_base_bdevs_operational": 3, 00:09:17.151 "base_bdevs_list": [ 00:09:17.151 { 00:09:17.151 
"name": "BaseBdev1", 00:09:17.151 "uuid": "863855bd-4d5b-5cd4-b4c2-ffb360e07a84", 00:09:17.151 "is_configured": true, 00:09:17.151 "data_offset": 2048, 00:09:17.151 "data_size": 63488 00:09:17.151 }, 00:09:17.151 { 00:09:17.151 "name": "BaseBdev2", 00:09:17.151 "uuid": "040ef024-7649-5b68-b44f-896ba4a22e5e", 00:09:17.151 "is_configured": true, 00:09:17.151 "data_offset": 2048, 00:09:17.151 "data_size": 63488 00:09:17.151 }, 00:09:17.151 { 00:09:17.151 "name": "BaseBdev3", 00:09:17.151 "uuid": "f46116af-4985-5edd-af99-081406dab3a3", 00:09:17.151 "is_configured": true, 00:09:17.151 "data_offset": 2048, 00:09:17.151 "data_size": 63488 00:09:17.151 } 00:09:17.151 ] 00:09:17.151 }' 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.151 23:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.411 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:17.411 23:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:17.671 [2024-12-06 23:43:29.036362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.616 23:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.616 23:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.616 "name": "raid_bdev1", 00:09:18.616 "uuid": "31cd6c65-1f3a-4bc0-8ebe-1981d1d8a82f", 00:09:18.616 "strip_size_kb": 64, 00:09:18.616 "state": "online", 
00:09:18.616 "raid_level": "concat", 00:09:18.616 "superblock": true, 00:09:18.616 "num_base_bdevs": 3, 00:09:18.616 "num_base_bdevs_discovered": 3, 00:09:18.616 "num_base_bdevs_operational": 3, 00:09:18.616 "base_bdevs_list": [ 00:09:18.616 { 00:09:18.616 "name": "BaseBdev1", 00:09:18.616 "uuid": "863855bd-4d5b-5cd4-b4c2-ffb360e07a84", 00:09:18.616 "is_configured": true, 00:09:18.616 "data_offset": 2048, 00:09:18.616 "data_size": 63488 00:09:18.616 }, 00:09:18.616 { 00:09:18.616 "name": "BaseBdev2", 00:09:18.616 "uuid": "040ef024-7649-5b68-b44f-896ba4a22e5e", 00:09:18.616 "is_configured": true, 00:09:18.616 "data_offset": 2048, 00:09:18.616 "data_size": 63488 00:09:18.616 }, 00:09:18.616 { 00:09:18.616 "name": "BaseBdev3", 00:09:18.616 "uuid": "f46116af-4985-5edd-af99-081406dab3a3", 00:09:18.616 "is_configured": true, 00:09:18.616 "data_offset": 2048, 00:09:18.616 "data_size": 63488 00:09:18.616 } 00:09:18.616 ] 00:09:18.616 }' 00:09:18.616 23:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.616 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.877 [2024-12-06 23:43:30.409464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.877 [2024-12-06 23:43:30.409514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.877 [2024-12-06 23:43:30.412486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.877 [2024-12-06 23:43:30.412567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.877 [2024-12-06 23:43:30.412633] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.877 [2024-12-06 23:43:30.412701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:18.877 { 00:09:18.877 "results": [ 00:09:18.877 { 00:09:18.877 "job": "raid_bdev1", 00:09:18.877 "core_mask": "0x1", 00:09:18.877 "workload": "randrw", 00:09:18.877 "percentage": 50, 00:09:18.877 "status": "finished", 00:09:18.877 "queue_depth": 1, 00:09:18.877 "io_size": 131072, 00:09:18.877 "runtime": 1.373603, 00:09:18.877 "iops": 13854.075740952809, 00:09:18.877 "mibps": 1731.759467619101, 00:09:18.877 "io_failed": 1, 00:09:18.877 "io_timeout": 0, 00:09:18.877 "avg_latency_us": 101.46126850262006, 00:09:18.877 "min_latency_us": 25.3764192139738, 00:09:18.877 "max_latency_us": 1395.1441048034935 00:09:18.877 } 00:09:18.877 ], 00:09:18.877 "core_count": 1 00:09:18.877 } 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67161 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67161 ']' 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67161 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.877 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67161 00:09:19.137 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.137 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.137 23:43:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67161' 00:09:19.137 killing process with pid 67161 00:09:19.137 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67161 00:09:19.137 [2024-12-06 23:43:30.455757] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.137 23:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67161 00:09:19.398 [2024-12-06 23:43:30.704698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.783 23:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bDYL1Cd4Xb 00:09:20.783 23:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.783 23:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.783 23:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:20.784 23:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:20.784 23:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.784 23:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.784 23:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:20.784 00:09:20.784 real 0m4.607s 00:09:20.784 user 0m5.285s 00:09:20.784 sys 0m0.652s 00:09:20.784 ************************************ 00:09:20.784 END TEST raid_write_error_test 00:09:20.784 ************************************ 00:09:20.784 23:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.784 23:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.784 23:43:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:20.784 23:43:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:20.784 23:43:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:20.784 23:43:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.784 23:43:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.784 ************************************ 00:09:20.784 START TEST raid_state_function_test 00:09:20.784 ************************************ 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67310 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67310' 00:09:20.784 Process raid pid: 67310 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67310 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67310 ']' 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.784 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.784 [2024-12-06 23:43:32.158258] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:09:20.784 [2024-12-06 23:43:32.158938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.784 [2024-12-06 23:43:32.334001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.045 [2024-12-06 23:43:32.471173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.304 [2024-12-06 23:43:32.702026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.304 [2024-12-06 23:43:32.702073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.565 [2024-12-06 23:43:32.975085] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.565 [2024-12-06 23:43:32.975157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.565 [2024-12-06 23:43:32.975168] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.565 [2024-12-06 23:43:32.975178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.565 [2024-12-06 23:43:32.975184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.565 [2024-12-06 23:43:32.975194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.565 
23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.565 23:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.565 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.565 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.565 "name": "Existed_Raid", 00:09:21.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.565 "strip_size_kb": 0, 00:09:21.565 "state": "configuring", 00:09:21.565 "raid_level": "raid1", 00:09:21.565 "superblock": false, 00:09:21.565 "num_base_bdevs": 3, 00:09:21.566 "num_base_bdevs_discovered": 0, 00:09:21.566 "num_base_bdevs_operational": 3, 00:09:21.566 "base_bdevs_list": [ 00:09:21.566 { 00:09:21.566 "name": "BaseBdev1", 00:09:21.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.566 "is_configured": false, 00:09:21.566 "data_offset": 0, 00:09:21.566 "data_size": 0 00:09:21.566 }, 00:09:21.566 { 00:09:21.566 "name": "BaseBdev2", 00:09:21.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.566 "is_configured": false, 00:09:21.566 "data_offset": 0, 00:09:21.566 "data_size": 0 00:09:21.566 }, 00:09:21.566 { 00:09:21.566 "name": "BaseBdev3", 00:09:21.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.566 "is_configured": false, 00:09:21.566 "data_offset": 0, 00:09:21.566 "data_size": 0 00:09:21.566 } 00:09:21.566 ] 00:09:21.566 }' 00:09:21.566 23:43:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.566 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 [2024-12-06 23:43:33.398367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.136 [2024-12-06 23:43:33.398510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 [2024-12-06 23:43:33.410311] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.136 [2024-12-06 23:43:33.410363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.136 [2024-12-06 23:43:33.410373] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.136 [2024-12-06 23:43:33.410383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.136 [2024-12-06 23:43:33.410388] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.136 [2024-12-06 23:43:33.410398] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 [2024-12-06 23:43:33.464979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.136 BaseBdev1 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 [ 00:09:22.136 { 00:09:22.136 "name": "BaseBdev1", 00:09:22.136 "aliases": [ 00:09:22.136 "e49fbd3b-2cbb-4ee0-8479-395fda0897b4" 00:09:22.136 ], 00:09:22.136 "product_name": "Malloc disk", 00:09:22.136 "block_size": 512, 00:09:22.136 "num_blocks": 65536, 00:09:22.136 "uuid": "e49fbd3b-2cbb-4ee0-8479-395fda0897b4", 00:09:22.136 "assigned_rate_limits": { 00:09:22.136 "rw_ios_per_sec": 0, 00:09:22.136 "rw_mbytes_per_sec": 0, 00:09:22.136 "r_mbytes_per_sec": 0, 00:09:22.136 "w_mbytes_per_sec": 0 00:09:22.136 }, 00:09:22.136 "claimed": true, 00:09:22.136 "claim_type": "exclusive_write", 00:09:22.136 "zoned": false, 00:09:22.136 "supported_io_types": { 00:09:22.136 "read": true, 00:09:22.136 "write": true, 00:09:22.136 "unmap": true, 00:09:22.136 "flush": true, 00:09:22.136 "reset": true, 00:09:22.136 "nvme_admin": false, 00:09:22.136 "nvme_io": false, 00:09:22.136 "nvme_io_md": false, 00:09:22.136 "write_zeroes": true, 00:09:22.136 "zcopy": true, 00:09:22.136 "get_zone_info": false, 00:09:22.136 "zone_management": false, 00:09:22.136 "zone_append": false, 00:09:22.136 "compare": false, 00:09:22.136 "compare_and_write": false, 00:09:22.136 "abort": true, 00:09:22.136 "seek_hole": false, 00:09:22.136 "seek_data": false, 00:09:22.136 "copy": true, 00:09:22.136 "nvme_iov_md": false 00:09:22.136 }, 00:09:22.136 "memory_domains": [ 00:09:22.136 { 00:09:22.136 "dma_device_id": "system", 00:09:22.136 "dma_device_type": 1 00:09:22.136 }, 00:09:22.136 { 00:09:22.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.136 "dma_device_type": 2 00:09:22.136 } 00:09:22.136 ], 00:09:22.136 "driver_specific": {} 00:09:22.136 } 00:09:22.136 ] 00:09:22.136 23:43:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.137 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:22.137 "name": "Existed_Raid", 00:09:22.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.137 "strip_size_kb": 0, 00:09:22.137 "state": "configuring", 00:09:22.137 "raid_level": "raid1", 00:09:22.137 "superblock": false, 00:09:22.137 "num_base_bdevs": 3, 00:09:22.137 "num_base_bdevs_discovered": 1, 00:09:22.137 "num_base_bdevs_operational": 3, 00:09:22.137 "base_bdevs_list": [ 00:09:22.137 { 00:09:22.137 "name": "BaseBdev1", 00:09:22.137 "uuid": "e49fbd3b-2cbb-4ee0-8479-395fda0897b4", 00:09:22.137 "is_configured": true, 00:09:22.137 "data_offset": 0, 00:09:22.137 "data_size": 65536 00:09:22.137 }, 00:09:22.137 { 00:09:22.137 "name": "BaseBdev2", 00:09:22.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.137 "is_configured": false, 00:09:22.137 "data_offset": 0, 00:09:22.137 "data_size": 0 00:09:22.137 }, 00:09:22.137 { 00:09:22.137 "name": "BaseBdev3", 00:09:22.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.137 "is_configured": false, 00:09:22.137 "data_offset": 0, 00:09:22.137 "data_size": 0 00:09:22.137 } 00:09:22.137 ] 00:09:22.137 }' 00:09:22.137 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.137 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.708 [2024-12-06 23:43:33.964162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.708 [2024-12-06 23:43:33.964319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.708 [2024-12-06 23:43:33.976201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.708 [2024-12-06 23:43:33.978454] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.708 [2024-12-06 23:43:33.978537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.708 [2024-12-06 23:43:33.978572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.708 [2024-12-06 23:43:33.978598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.708 23:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.708 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.708 "name": "Existed_Raid", 00:09:22.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.708 "strip_size_kb": 0, 00:09:22.708 "state": "configuring", 00:09:22.708 "raid_level": "raid1", 00:09:22.708 "superblock": false, 00:09:22.708 "num_base_bdevs": 3, 00:09:22.708 "num_base_bdevs_discovered": 1, 00:09:22.708 "num_base_bdevs_operational": 3, 00:09:22.708 "base_bdevs_list": [ 00:09:22.708 { 00:09:22.708 "name": "BaseBdev1", 00:09:22.708 "uuid": "e49fbd3b-2cbb-4ee0-8479-395fda0897b4", 00:09:22.708 "is_configured": true, 00:09:22.708 "data_offset": 0, 00:09:22.708 "data_size": 65536 00:09:22.708 }, 00:09:22.708 { 00:09:22.708 "name": "BaseBdev2", 00:09:22.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.708 
"is_configured": false, 00:09:22.708 "data_offset": 0, 00:09:22.708 "data_size": 0 00:09:22.708 }, 00:09:22.708 { 00:09:22.708 "name": "BaseBdev3", 00:09:22.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.708 "is_configured": false, 00:09:22.708 "data_offset": 0, 00:09:22.708 "data_size": 0 00:09:22.708 } 00:09:22.708 ] 00:09:22.708 }' 00:09:22.708 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.708 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 [2024-12-06 23:43:34.441395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.968 BaseBdev2 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.968 23:43:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 [ 00:09:22.968 { 00:09:22.968 "name": "BaseBdev2", 00:09:22.968 "aliases": [ 00:09:22.968 "a4d33880-e65c-48bb-ad74-7f04451d1f17" 00:09:22.968 ], 00:09:22.968 "product_name": "Malloc disk", 00:09:22.968 "block_size": 512, 00:09:22.968 "num_blocks": 65536, 00:09:22.968 "uuid": "a4d33880-e65c-48bb-ad74-7f04451d1f17", 00:09:22.968 "assigned_rate_limits": { 00:09:22.968 "rw_ios_per_sec": 0, 00:09:22.968 "rw_mbytes_per_sec": 0, 00:09:22.968 "r_mbytes_per_sec": 0, 00:09:22.968 "w_mbytes_per_sec": 0 00:09:22.968 }, 00:09:22.968 "claimed": true, 00:09:22.968 "claim_type": "exclusive_write", 00:09:22.968 "zoned": false, 00:09:22.968 "supported_io_types": { 00:09:22.968 "read": true, 00:09:22.968 "write": true, 00:09:22.968 "unmap": true, 00:09:22.968 "flush": true, 00:09:22.968 "reset": true, 00:09:22.968 "nvme_admin": false, 00:09:22.968 "nvme_io": false, 00:09:22.968 "nvme_io_md": false, 00:09:22.968 "write_zeroes": true, 00:09:22.968 "zcopy": true, 00:09:22.968 "get_zone_info": false, 00:09:22.968 "zone_management": false, 00:09:22.968 "zone_append": false, 00:09:22.968 "compare": false, 00:09:22.968 "compare_and_write": false, 00:09:22.968 "abort": true, 00:09:22.968 "seek_hole": false, 00:09:22.968 "seek_data": false, 00:09:22.968 "copy": true, 00:09:22.968 "nvme_iov_md": false 00:09:22.968 }, 00:09:22.968 
"memory_domains": [ 00:09:22.968 { 00:09:22.968 "dma_device_id": "system", 00:09:22.968 "dma_device_type": 1 00:09:22.968 }, 00:09:22.968 { 00:09:22.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.968 "dma_device_type": 2 00:09:22.968 } 00:09:22.968 ], 00:09:22.968 "driver_specific": {} 00:09:22.968 } 00:09:22.968 ] 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.968 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.227 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.227 "name": "Existed_Raid", 00:09:23.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.227 "strip_size_kb": 0, 00:09:23.227 "state": "configuring", 00:09:23.227 "raid_level": "raid1", 00:09:23.227 "superblock": false, 00:09:23.227 "num_base_bdevs": 3, 00:09:23.227 "num_base_bdevs_discovered": 2, 00:09:23.227 "num_base_bdevs_operational": 3, 00:09:23.227 "base_bdevs_list": [ 00:09:23.227 { 00:09:23.227 "name": "BaseBdev1", 00:09:23.227 "uuid": "e49fbd3b-2cbb-4ee0-8479-395fda0897b4", 00:09:23.227 "is_configured": true, 00:09:23.227 "data_offset": 0, 00:09:23.227 "data_size": 65536 00:09:23.227 }, 00:09:23.227 { 00:09:23.227 "name": "BaseBdev2", 00:09:23.227 "uuid": "a4d33880-e65c-48bb-ad74-7f04451d1f17", 00:09:23.227 "is_configured": true, 00:09:23.227 "data_offset": 0, 00:09:23.227 "data_size": 65536 00:09:23.227 }, 00:09:23.227 { 00:09:23.227 "name": "BaseBdev3", 00:09:23.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.227 "is_configured": false, 00:09:23.227 "data_offset": 0, 00:09:23.227 "data_size": 0 00:09:23.227 } 00:09:23.227 ] 00:09:23.227 }' 00:09:23.227 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.227 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.487 [2024-12-06 23:43:34.979229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.487 [2024-12-06 23:43:34.979360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.487 [2024-12-06 23:43:34.979381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:23.487 [2024-12-06 23:43:34.979721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:23.487 [2024-12-06 23:43:34.979928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.487 [2024-12-06 23:43:34.979939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:23.487 BaseBdev3 00:09:23.487 [2024-12-06 23:43:34.980211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.487 23:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.487 [ 00:09:23.487 { 00:09:23.487 "name": "BaseBdev3", 00:09:23.487 "aliases": [ 00:09:23.487 "9148812d-44f9-4f86-821d-8c0e413bb8cf" 00:09:23.487 ], 00:09:23.487 "product_name": "Malloc disk", 00:09:23.487 "block_size": 512, 00:09:23.487 "num_blocks": 65536, 00:09:23.487 "uuid": "9148812d-44f9-4f86-821d-8c0e413bb8cf", 00:09:23.487 "assigned_rate_limits": { 00:09:23.487 "rw_ios_per_sec": 0, 00:09:23.487 "rw_mbytes_per_sec": 0, 00:09:23.487 "r_mbytes_per_sec": 0, 00:09:23.487 "w_mbytes_per_sec": 0 00:09:23.487 }, 00:09:23.487 "claimed": true, 00:09:23.487 "claim_type": "exclusive_write", 00:09:23.487 "zoned": false, 00:09:23.487 "supported_io_types": { 00:09:23.487 "read": true, 00:09:23.487 "write": true, 00:09:23.487 "unmap": true, 00:09:23.487 "flush": true, 00:09:23.487 "reset": true, 00:09:23.487 "nvme_admin": false, 00:09:23.487 "nvme_io": false, 00:09:23.487 "nvme_io_md": false, 00:09:23.487 "write_zeroes": true, 00:09:23.487 "zcopy": true, 00:09:23.487 "get_zone_info": false, 00:09:23.487 "zone_management": false, 00:09:23.487 "zone_append": false, 00:09:23.487 "compare": false, 00:09:23.487 "compare_and_write": false, 00:09:23.487 "abort": true, 00:09:23.487 "seek_hole": false, 00:09:23.487 "seek_data": false, 00:09:23.487 
"copy": true, 00:09:23.487 "nvme_iov_md": false 00:09:23.487 }, 00:09:23.487 "memory_domains": [ 00:09:23.487 { 00:09:23.487 "dma_device_id": "system", 00:09:23.487 "dma_device_type": 1 00:09:23.487 }, 00:09:23.487 { 00:09:23.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.487 "dma_device_type": 2 00:09:23.487 } 00:09:23.487 ], 00:09:23.487 "driver_specific": {} 00:09:23.487 } 00:09:23.487 ] 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.487 23:43:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.487 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.746 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.746 "name": "Existed_Raid", 00:09:23.746 "uuid": "d533192c-579a-4a28-a3be-c7c4ec33fb1e", 00:09:23.746 "strip_size_kb": 0, 00:09:23.746 "state": "online", 00:09:23.746 "raid_level": "raid1", 00:09:23.746 "superblock": false, 00:09:23.746 "num_base_bdevs": 3, 00:09:23.746 "num_base_bdevs_discovered": 3, 00:09:23.746 "num_base_bdevs_operational": 3, 00:09:23.746 "base_bdevs_list": [ 00:09:23.746 { 00:09:23.746 "name": "BaseBdev1", 00:09:23.746 "uuid": "e49fbd3b-2cbb-4ee0-8479-395fda0897b4", 00:09:23.746 "is_configured": true, 00:09:23.746 "data_offset": 0, 00:09:23.746 "data_size": 65536 00:09:23.746 }, 00:09:23.746 { 00:09:23.746 "name": "BaseBdev2", 00:09:23.746 "uuid": "a4d33880-e65c-48bb-ad74-7f04451d1f17", 00:09:23.746 "is_configured": true, 00:09:23.746 "data_offset": 0, 00:09:23.746 "data_size": 65536 00:09:23.746 }, 00:09:23.746 { 00:09:23.746 "name": "BaseBdev3", 00:09:23.746 "uuid": "9148812d-44f9-4f86-821d-8c0e413bb8cf", 00:09:23.746 "is_configured": true, 00:09:23.746 "data_offset": 0, 00:09:23.746 "data_size": 65536 00:09:23.746 } 00:09:23.746 ] 00:09:23.746 }' 00:09:23.746 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.746 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.005 23:43:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.005 [2024-12-06 23:43:35.486741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.005 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.005 "name": "Existed_Raid", 00:09:24.005 "aliases": [ 00:09:24.005 "d533192c-579a-4a28-a3be-c7c4ec33fb1e" 00:09:24.005 ], 00:09:24.005 "product_name": "Raid Volume", 00:09:24.005 "block_size": 512, 00:09:24.005 "num_blocks": 65536, 00:09:24.005 "uuid": "d533192c-579a-4a28-a3be-c7c4ec33fb1e", 00:09:24.005 "assigned_rate_limits": { 00:09:24.005 "rw_ios_per_sec": 0, 00:09:24.005 "rw_mbytes_per_sec": 0, 00:09:24.005 "r_mbytes_per_sec": 0, 00:09:24.005 "w_mbytes_per_sec": 0 00:09:24.005 }, 00:09:24.005 "claimed": false, 00:09:24.005 "zoned": false, 
00:09:24.005 "supported_io_types": { 00:09:24.005 "read": true, 00:09:24.005 "write": true, 00:09:24.005 "unmap": false, 00:09:24.005 "flush": false, 00:09:24.005 "reset": true, 00:09:24.005 "nvme_admin": false, 00:09:24.005 "nvme_io": false, 00:09:24.005 "nvme_io_md": false, 00:09:24.005 "write_zeroes": true, 00:09:24.005 "zcopy": false, 00:09:24.005 "get_zone_info": false, 00:09:24.005 "zone_management": false, 00:09:24.005 "zone_append": false, 00:09:24.005 "compare": false, 00:09:24.005 "compare_and_write": false, 00:09:24.005 "abort": false, 00:09:24.005 "seek_hole": false, 00:09:24.005 "seek_data": false, 00:09:24.005 "copy": false, 00:09:24.005 "nvme_iov_md": false 00:09:24.005 }, 00:09:24.005 "memory_domains": [ 00:09:24.005 { 00:09:24.005 "dma_device_id": "system", 00:09:24.005 "dma_device_type": 1 00:09:24.005 }, 00:09:24.005 { 00:09:24.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.005 "dma_device_type": 2 00:09:24.005 }, 00:09:24.005 { 00:09:24.005 "dma_device_id": "system", 00:09:24.005 "dma_device_type": 1 00:09:24.005 }, 00:09:24.005 { 00:09:24.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.005 "dma_device_type": 2 00:09:24.005 }, 00:09:24.005 { 00:09:24.005 "dma_device_id": "system", 00:09:24.005 "dma_device_type": 1 00:09:24.005 }, 00:09:24.005 { 00:09:24.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.005 "dma_device_type": 2 00:09:24.005 } 00:09:24.005 ], 00:09:24.005 "driver_specific": { 00:09:24.005 "raid": { 00:09:24.005 "uuid": "d533192c-579a-4a28-a3be-c7c4ec33fb1e", 00:09:24.005 "strip_size_kb": 0, 00:09:24.005 "state": "online", 00:09:24.005 "raid_level": "raid1", 00:09:24.005 "superblock": false, 00:09:24.005 "num_base_bdevs": 3, 00:09:24.005 "num_base_bdevs_discovered": 3, 00:09:24.005 "num_base_bdevs_operational": 3, 00:09:24.005 "base_bdevs_list": [ 00:09:24.005 { 00:09:24.005 "name": "BaseBdev1", 00:09:24.006 "uuid": "e49fbd3b-2cbb-4ee0-8479-395fda0897b4", 00:09:24.006 "is_configured": true, 00:09:24.006 
"data_offset": 0, 00:09:24.006 "data_size": 65536 00:09:24.006 }, 00:09:24.006 { 00:09:24.006 "name": "BaseBdev2", 00:09:24.006 "uuid": "a4d33880-e65c-48bb-ad74-7f04451d1f17", 00:09:24.006 "is_configured": true, 00:09:24.006 "data_offset": 0, 00:09:24.006 "data_size": 65536 00:09:24.006 }, 00:09:24.006 { 00:09:24.006 "name": "BaseBdev3", 00:09:24.006 "uuid": "9148812d-44f9-4f86-821d-8c0e413bb8cf", 00:09:24.006 "is_configured": true, 00:09:24.006 "data_offset": 0, 00:09:24.006 "data_size": 65536 00:09:24.006 } 00:09:24.006 ] 00:09:24.006 } 00:09:24.006 } 00:09:24.006 }' 00:09:24.006 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.006 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:24.006 BaseBdev2 00:09:24.006 BaseBdev3' 00:09:24.006 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.264 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.265 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.265 [2024-12-06 23:43:35.754029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.524 "name": "Existed_Raid", 00:09:24.524 "uuid": "d533192c-579a-4a28-a3be-c7c4ec33fb1e", 00:09:24.524 "strip_size_kb": 0, 00:09:24.524 "state": "online", 00:09:24.524 "raid_level": "raid1", 00:09:24.524 "superblock": false, 00:09:24.524 "num_base_bdevs": 3, 00:09:24.524 "num_base_bdevs_discovered": 2, 00:09:24.524 "num_base_bdevs_operational": 2, 00:09:24.524 "base_bdevs_list": [ 00:09:24.524 { 00:09:24.524 "name": null, 00:09:24.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.524 "is_configured": false, 00:09:24.524 "data_offset": 0, 00:09:24.524 "data_size": 65536 00:09:24.524 }, 00:09:24.524 { 00:09:24.524 "name": "BaseBdev2", 00:09:24.524 "uuid": "a4d33880-e65c-48bb-ad74-7f04451d1f17", 00:09:24.524 "is_configured": true, 00:09:24.524 "data_offset": 0, 00:09:24.524 "data_size": 65536 00:09:24.524 }, 00:09:24.524 { 00:09:24.524 "name": "BaseBdev3", 00:09:24.524 "uuid": "9148812d-44f9-4f86-821d-8c0e413bb8cf", 00:09:24.524 "is_configured": true, 00:09:24.524 "data_offset": 0, 00:09:24.524 "data_size": 65536 00:09:24.524 } 00:09:24.524 ] 
00:09:24.524 }' 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.524 23:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.784 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.784 [2024-12-06 23:43:36.336090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.043 23:43:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.043 [2024-12-06 23:43:36.493908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.043 [2024-12-06 23:43:36.494098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.043 [2024-12-06 23:43:36.596403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.043 [2024-12-06 23:43:36.596459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.043 [2024-12-06 23:43:36.596483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.043 23:43:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.043 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 BaseBdev2 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.303 
23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 [ 00:09:25.303 { 00:09:25.303 "name": "BaseBdev2", 00:09:25.303 "aliases": [ 00:09:25.303 "795859dc-0965-4857-a784-bd855b8bf734" 00:09:25.303 ], 00:09:25.303 "product_name": "Malloc disk", 00:09:25.303 "block_size": 512, 00:09:25.303 "num_blocks": 65536, 00:09:25.303 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:25.303 "assigned_rate_limits": { 00:09:25.303 "rw_ios_per_sec": 0, 00:09:25.303 "rw_mbytes_per_sec": 0, 00:09:25.303 "r_mbytes_per_sec": 0, 00:09:25.303 "w_mbytes_per_sec": 0 00:09:25.303 }, 00:09:25.303 "claimed": false, 00:09:25.303 "zoned": false, 00:09:25.303 "supported_io_types": { 00:09:25.303 "read": true, 00:09:25.303 "write": true, 00:09:25.303 "unmap": true, 00:09:25.303 "flush": true, 00:09:25.303 "reset": true, 00:09:25.303 "nvme_admin": false, 00:09:25.303 "nvme_io": false, 00:09:25.303 "nvme_io_md": false, 00:09:25.303 "write_zeroes": true, 
00:09:25.303 "zcopy": true, 00:09:25.303 "get_zone_info": false, 00:09:25.303 "zone_management": false, 00:09:25.303 "zone_append": false, 00:09:25.303 "compare": false, 00:09:25.303 "compare_and_write": false, 00:09:25.303 "abort": true, 00:09:25.303 "seek_hole": false, 00:09:25.303 "seek_data": false, 00:09:25.303 "copy": true, 00:09:25.303 "nvme_iov_md": false 00:09:25.303 }, 00:09:25.303 "memory_domains": [ 00:09:25.303 { 00:09:25.303 "dma_device_id": "system", 00:09:25.303 "dma_device_type": 1 00:09:25.303 }, 00:09:25.303 { 00:09:25.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.303 "dma_device_type": 2 00:09:25.303 } 00:09:25.303 ], 00:09:25.303 "driver_specific": {} 00:09:25.303 } 00:09:25.303 ] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 BaseBdev3 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.303 23:43:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 [ 00:09:25.303 { 00:09:25.303 "name": "BaseBdev3", 00:09:25.303 "aliases": [ 00:09:25.303 "afe4d597-793f-4058-9391-b6888c856afa" 00:09:25.303 ], 00:09:25.303 "product_name": "Malloc disk", 00:09:25.303 "block_size": 512, 00:09:25.303 "num_blocks": 65536, 00:09:25.303 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:25.303 "assigned_rate_limits": { 00:09:25.303 "rw_ios_per_sec": 0, 00:09:25.303 "rw_mbytes_per_sec": 0, 00:09:25.303 "r_mbytes_per_sec": 0, 00:09:25.303 "w_mbytes_per_sec": 0 00:09:25.303 }, 00:09:25.303 "claimed": false, 00:09:25.303 "zoned": false, 00:09:25.303 "supported_io_types": { 00:09:25.303 "read": true, 00:09:25.303 "write": true, 00:09:25.303 "unmap": true, 00:09:25.303 "flush": true, 00:09:25.303 "reset": true, 00:09:25.303 "nvme_admin": false, 00:09:25.303 "nvme_io": false, 00:09:25.303 "nvme_io_md": false, 00:09:25.303 "write_zeroes": true, 
00:09:25.303 "zcopy": true, 00:09:25.303 "get_zone_info": false, 00:09:25.303 "zone_management": false, 00:09:25.303 "zone_append": false, 00:09:25.303 "compare": false, 00:09:25.303 "compare_and_write": false, 00:09:25.303 "abort": true, 00:09:25.303 "seek_hole": false, 00:09:25.303 "seek_data": false, 00:09:25.303 "copy": true, 00:09:25.303 "nvme_iov_md": false 00:09:25.303 }, 00:09:25.303 "memory_domains": [ 00:09:25.303 { 00:09:25.303 "dma_device_id": "system", 00:09:25.303 "dma_device_type": 1 00:09:25.303 }, 00:09:25.303 { 00:09:25.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.303 "dma_device_type": 2 00:09:25.303 } 00:09:25.303 ], 00:09:25.303 "driver_specific": {} 00:09:25.303 } 00:09:25.303 ] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.303 [2024-12-06 23:43:36.823112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.303 [2024-12-06 23:43:36.823248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.303 [2024-12-06 23:43:36.823273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.303 [2024-12-06 23:43:36.825337] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.303 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.304 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.563 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:25.563 "name": "Existed_Raid", 00:09:25.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.563 "strip_size_kb": 0, 00:09:25.563 "state": "configuring", 00:09:25.563 "raid_level": "raid1", 00:09:25.563 "superblock": false, 00:09:25.563 "num_base_bdevs": 3, 00:09:25.563 "num_base_bdevs_discovered": 2, 00:09:25.563 "num_base_bdevs_operational": 3, 00:09:25.563 "base_bdevs_list": [ 00:09:25.563 { 00:09:25.563 "name": "BaseBdev1", 00:09:25.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.563 "is_configured": false, 00:09:25.563 "data_offset": 0, 00:09:25.563 "data_size": 0 00:09:25.563 }, 00:09:25.563 { 00:09:25.563 "name": "BaseBdev2", 00:09:25.563 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:25.563 "is_configured": true, 00:09:25.563 "data_offset": 0, 00:09:25.563 "data_size": 65536 00:09:25.563 }, 00:09:25.563 { 00:09:25.563 "name": "BaseBdev3", 00:09:25.563 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:25.563 "is_configured": true, 00:09:25.563 "data_offset": 0, 00:09:25.563 "data_size": 65536 00:09:25.563 } 00:09:25.563 ] 00:09:25.563 }' 00:09:25.563 23:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.563 23:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.823 [2024-12-06 23:43:37.306443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.823 "name": "Existed_Raid", 00:09:25.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.823 "strip_size_kb": 0, 00:09:25.823 "state": "configuring", 00:09:25.823 "raid_level": "raid1", 00:09:25.823 "superblock": false, 00:09:25.823 "num_base_bdevs": 3, 
00:09:25.823 "num_base_bdevs_discovered": 1, 00:09:25.823 "num_base_bdevs_operational": 3, 00:09:25.823 "base_bdevs_list": [ 00:09:25.823 { 00:09:25.823 "name": "BaseBdev1", 00:09:25.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.823 "is_configured": false, 00:09:25.823 "data_offset": 0, 00:09:25.823 "data_size": 0 00:09:25.823 }, 00:09:25.823 { 00:09:25.823 "name": null, 00:09:25.823 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:25.823 "is_configured": false, 00:09:25.823 "data_offset": 0, 00:09:25.823 "data_size": 65536 00:09:25.823 }, 00:09:25.823 { 00:09:25.823 "name": "BaseBdev3", 00:09:25.823 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:25.823 "is_configured": true, 00:09:25.823 "data_offset": 0, 00:09:25.823 "data_size": 65536 00:09:25.823 } 00:09:25.823 ] 00:09:25.823 }' 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.823 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.391 23:43:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 [2024-12-06 23:43:37.848345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.391 BaseBdev1 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 [ 00:09:26.391 { 00:09:26.391 "name": "BaseBdev1", 00:09:26.391 "aliases": [ 00:09:26.391 "ef2eb4f0-a767-4c32-8221-0249580cd5e6" 00:09:26.391 ], 00:09:26.391 "product_name": "Malloc disk", 
00:09:26.391 "block_size": 512, 00:09:26.391 "num_blocks": 65536, 00:09:26.391 "uuid": "ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:26.391 "assigned_rate_limits": { 00:09:26.391 "rw_ios_per_sec": 0, 00:09:26.391 "rw_mbytes_per_sec": 0, 00:09:26.391 "r_mbytes_per_sec": 0, 00:09:26.391 "w_mbytes_per_sec": 0 00:09:26.391 }, 00:09:26.391 "claimed": true, 00:09:26.391 "claim_type": "exclusive_write", 00:09:26.391 "zoned": false, 00:09:26.391 "supported_io_types": { 00:09:26.391 "read": true, 00:09:26.391 "write": true, 00:09:26.391 "unmap": true, 00:09:26.391 "flush": true, 00:09:26.391 "reset": true, 00:09:26.391 "nvme_admin": false, 00:09:26.391 "nvme_io": false, 00:09:26.391 "nvme_io_md": false, 00:09:26.391 "write_zeroes": true, 00:09:26.391 "zcopy": true, 00:09:26.391 "get_zone_info": false, 00:09:26.391 "zone_management": false, 00:09:26.391 "zone_append": false, 00:09:26.391 "compare": false, 00:09:26.391 "compare_and_write": false, 00:09:26.391 "abort": true, 00:09:26.391 "seek_hole": false, 00:09:26.391 "seek_data": false, 00:09:26.391 "copy": true, 00:09:26.391 "nvme_iov_md": false 00:09:26.391 }, 00:09:26.391 "memory_domains": [ 00:09:26.391 { 00:09:26.391 "dma_device_id": "system", 00:09:26.391 "dma_device_type": 1 00:09:26.391 }, 00:09:26.391 { 00:09:26.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.391 "dma_device_type": 2 00:09:26.391 } 00:09:26.391 ], 00:09:26.391 "driver_specific": {} 00:09:26.391 } 00:09:26.391 ] 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.391 23:43:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.392 "name": "Existed_Raid", 00:09:26.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.392 "strip_size_kb": 0, 00:09:26.392 "state": "configuring", 00:09:26.392 "raid_level": "raid1", 00:09:26.392 "superblock": false, 00:09:26.392 "num_base_bdevs": 3, 00:09:26.392 "num_base_bdevs_discovered": 2, 00:09:26.392 "num_base_bdevs_operational": 3, 00:09:26.392 "base_bdevs_list": [ 00:09:26.392 { 00:09:26.392 "name": "BaseBdev1", 00:09:26.392 "uuid": 
"ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:26.392 "is_configured": true, 00:09:26.392 "data_offset": 0, 00:09:26.392 "data_size": 65536 00:09:26.392 }, 00:09:26.392 { 00:09:26.392 "name": null, 00:09:26.392 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:26.392 "is_configured": false, 00:09:26.392 "data_offset": 0, 00:09:26.392 "data_size": 65536 00:09:26.392 }, 00:09:26.392 { 00:09:26.392 "name": "BaseBdev3", 00:09:26.392 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:26.392 "is_configured": true, 00:09:26.392 "data_offset": 0, 00:09:26.392 "data_size": 65536 00:09:26.392 } 00:09:26.392 ] 00:09:26.392 }' 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.392 23:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.958 [2024-12-06 23:43:38.331556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.958 23:43:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.958 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.958 "name": "Existed_Raid", 00:09:26.958 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:26.958 "strip_size_kb": 0, 00:09:26.958 "state": "configuring", 00:09:26.958 "raid_level": "raid1", 00:09:26.958 "superblock": false, 00:09:26.958 "num_base_bdevs": 3, 00:09:26.958 "num_base_bdevs_discovered": 1, 00:09:26.958 "num_base_bdevs_operational": 3, 00:09:26.958 "base_bdevs_list": [ 00:09:26.958 { 00:09:26.958 "name": "BaseBdev1", 00:09:26.958 "uuid": "ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:26.958 "is_configured": true, 00:09:26.958 "data_offset": 0, 00:09:26.958 "data_size": 65536 00:09:26.958 }, 00:09:26.958 { 00:09:26.958 "name": null, 00:09:26.958 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:26.958 "is_configured": false, 00:09:26.958 "data_offset": 0, 00:09:26.958 "data_size": 65536 00:09:26.958 }, 00:09:26.958 { 00:09:26.958 "name": null, 00:09:26.958 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:26.958 "is_configured": false, 00:09:26.958 "data_offset": 0, 00:09:26.958 "data_size": 65536 00:09:26.958 } 00:09:26.958 ] 00:09:26.959 }' 00:09:26.959 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.959 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.217 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.217 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.217 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.217 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.480 [2024-12-06 23:43:38.798904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.480 "name": "Existed_Raid", 00:09:27.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.480 "strip_size_kb": 0, 00:09:27.480 "state": "configuring", 00:09:27.480 "raid_level": "raid1", 00:09:27.480 "superblock": false, 00:09:27.480 "num_base_bdevs": 3, 00:09:27.480 "num_base_bdevs_discovered": 2, 00:09:27.480 "num_base_bdevs_operational": 3, 00:09:27.480 "base_bdevs_list": [ 00:09:27.480 { 00:09:27.480 "name": "BaseBdev1", 00:09:27.480 "uuid": "ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:27.480 "is_configured": true, 00:09:27.480 "data_offset": 0, 00:09:27.480 "data_size": 65536 00:09:27.480 }, 00:09:27.480 { 00:09:27.480 "name": null, 00:09:27.480 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:27.480 "is_configured": false, 00:09:27.480 "data_offset": 0, 00:09:27.480 "data_size": 65536 00:09:27.480 }, 00:09:27.480 { 00:09:27.480 "name": "BaseBdev3", 00:09:27.480 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:27.480 "is_configured": true, 00:09:27.480 "data_offset": 0, 00:09:27.480 "data_size": 65536 00:09:27.480 } 00:09:27.480 ] 00:09:27.480 }' 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.480 23:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
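The `verify_raid_bdev_state` helper invoked throughout this run fetches the raid bdev via `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the fields against its expectations. The sketch below mirrors those checks in Python against a hand-trimmed copy of the JSON dumped above; it is an illustration of what the helper asserts, not live RPC output, and the exact fields the real helper compares may differ.

```python
import json

# raid_bdev_info trimmed from the bdev_raid_get_bdevs dump above; the field
# names match the log, but this dict is a hand-copied illustration.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true,  "data_size": 65536},
    {"name": null,        "is_configured": false, "data_size": 65536},
    {"name": "BaseBdev3", "is_configured": true,  "data_size": 65536}
  ]
}
""")

# Mirror the comparisons verify_raid_bdev_state makes on the jq-filtered JSON:
# expected state, raid level, strip size, and the discovered/operational counts.
assert raid_bdev_info["state"] == "configuring"
assert raid_bdev_info["raid_level"] == "raid1"
assert raid_bdev_info["strip_size_kb"] == 0
discovered = sum(b["is_configured"] for b in raid_bdev_info["base_bdevs_list"])
print(discovered)  # → 2
```

Note how a removed base bdev keeps its slot (same `uuid`, `name: null`, `is_configured: false`) rather than shrinking `base_bdevs_list`, which is why the test can re-add a bdev into the same position later.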
00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.746 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.746 [2024-12-06 23:43:39.298062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.004 23:43:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.004 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.004 "name": "Existed_Raid", 00:09:28.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.004 "strip_size_kb": 0, 00:09:28.004 "state": "configuring", 00:09:28.004 "raid_level": "raid1", 00:09:28.005 "superblock": false, 00:09:28.005 "num_base_bdevs": 3, 00:09:28.005 "num_base_bdevs_discovered": 1, 00:09:28.005 "num_base_bdevs_operational": 3, 00:09:28.005 "base_bdevs_list": [ 00:09:28.005 { 00:09:28.005 "name": null, 00:09:28.005 "uuid": "ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:28.005 "is_configured": false, 00:09:28.005 "data_offset": 0, 00:09:28.005 "data_size": 65536 00:09:28.005 }, 00:09:28.005 { 00:09:28.005 "name": null, 00:09:28.005 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:28.005 "is_configured": false, 00:09:28.005 "data_offset": 0, 00:09:28.005 "data_size": 65536 00:09:28.005 }, 00:09:28.005 { 00:09:28.005 "name": "BaseBdev3", 00:09:28.005 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:28.005 "is_configured": true, 00:09:28.005 "data_offset": 0, 00:09:28.005 "data_size": 65536 00:09:28.005 } 00:09:28.005 ] 00:09:28.005 }' 00:09:28.005 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.005 23:43:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.572 [2024-12-06 23:43:39.898045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.572 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.572 "name": "Existed_Raid", 00:09:28.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.572 "strip_size_kb": 0, 00:09:28.572 "state": "configuring", 00:09:28.572 "raid_level": "raid1", 00:09:28.572 "superblock": false, 00:09:28.572 "num_base_bdevs": 3, 00:09:28.572 "num_base_bdevs_discovered": 2, 00:09:28.572 "num_base_bdevs_operational": 3, 00:09:28.572 "base_bdevs_list": [ 00:09:28.572 { 00:09:28.572 "name": null, 00:09:28.572 "uuid": "ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:28.572 "is_configured": false, 00:09:28.572 "data_offset": 0, 00:09:28.572 "data_size": 65536 00:09:28.572 }, 00:09:28.572 { 00:09:28.572 "name": "BaseBdev2", 00:09:28.572 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:28.572 "is_configured": true, 00:09:28.572 "data_offset": 0, 00:09:28.572 "data_size": 65536 00:09:28.572 }, 00:09:28.572 { 
00:09:28.572 "name": "BaseBdev3", 00:09:28.572 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:28.572 "is_configured": true, 00:09:28.572 "data_offset": 0, 00:09:28.572 "data_size": 65536 00:09:28.572 } 00:09:28.572 ] 00:09:28.573 }' 00:09:28.573 23:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.573 23:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.832 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ef2eb4f0-a767-4c32-8221-0249580cd5e6 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.092 23:43:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 [2024-12-06 23:43:40.451052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.092 [2024-12-06 23:43:40.451193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:29.092 [2024-12-06 23:43:40.451219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:29.092 [2024-12-06 23:43:40.451511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:29.092 [2024-12-06 23:43:40.451726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:29.092 [2024-12-06 23:43:40.451770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:29.092 [2024-12-06 23:43:40.452059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.092 NewBaseBdev 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 [ 00:09:29.092 { 00:09:29.092 "name": "NewBaseBdev", 00:09:29.092 "aliases": [ 00:09:29.092 "ef2eb4f0-a767-4c32-8221-0249580cd5e6" 00:09:29.092 ], 00:09:29.092 "product_name": "Malloc disk", 00:09:29.092 "block_size": 512, 00:09:29.092 "num_blocks": 65536, 00:09:29.092 "uuid": "ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:29.092 "assigned_rate_limits": { 00:09:29.092 "rw_ios_per_sec": 0, 00:09:29.092 "rw_mbytes_per_sec": 0, 00:09:29.092 "r_mbytes_per_sec": 0, 00:09:29.092 "w_mbytes_per_sec": 0 00:09:29.092 }, 00:09:29.092 "claimed": true, 00:09:29.092 "claim_type": "exclusive_write", 00:09:29.092 "zoned": false, 00:09:29.092 "supported_io_types": { 00:09:29.092 "read": true, 00:09:29.092 "write": true, 00:09:29.092 "unmap": true, 00:09:29.092 "flush": true, 00:09:29.092 "reset": true, 00:09:29.092 "nvme_admin": false, 00:09:29.092 "nvme_io": false, 00:09:29.092 "nvme_io_md": false, 00:09:29.092 "write_zeroes": true, 00:09:29.092 "zcopy": true, 00:09:29.092 "get_zone_info": false, 00:09:29.092 "zone_management": false, 00:09:29.092 "zone_append": false, 00:09:29.092 "compare": false, 00:09:29.092 "compare_and_write": false, 00:09:29.092 "abort": true, 00:09:29.092 "seek_hole": false, 00:09:29.092 "seek_data": false, 00:09:29.092 "copy": true, 00:09:29.092 "nvme_iov_md": false 00:09:29.092 }, 00:09:29.092 "memory_domains": [ 00:09:29.092 { 00:09:29.092 
"dma_device_id": "system", 00:09:29.092 "dma_device_type": 1 00:09:29.092 }, 00:09:29.092 { 00:09:29.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.092 "dma_device_type": 2 00:09:29.092 } 00:09:29.092 ], 00:09:29.092 "driver_specific": {} 00:09:29.092 } 00:09:29.092 ] 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.092 "name": "Existed_Raid", 00:09:29.092 "uuid": "c1017b0f-bfa6-4820-9d20-b48f8f429e55", 00:09:29.092 "strip_size_kb": 0, 00:09:29.092 "state": "online", 00:09:29.092 "raid_level": "raid1", 00:09:29.092 "superblock": false, 00:09:29.092 "num_base_bdevs": 3, 00:09:29.092 "num_base_bdevs_discovered": 3, 00:09:29.092 "num_base_bdevs_operational": 3, 00:09:29.092 "base_bdevs_list": [ 00:09:29.092 { 00:09:29.092 "name": "NewBaseBdev", 00:09:29.092 "uuid": "ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:29.092 "is_configured": true, 00:09:29.092 "data_offset": 0, 00:09:29.092 "data_size": 65536 00:09:29.092 }, 00:09:29.092 { 00:09:29.092 "name": "BaseBdev2", 00:09:29.092 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:29.092 "is_configured": true, 00:09:29.092 "data_offset": 0, 00:09:29.092 "data_size": 65536 00:09:29.092 }, 00:09:29.092 { 00:09:29.092 "name": "BaseBdev3", 00:09:29.092 "uuid": "afe4d597-793f-4058-9391-b6888c856afa", 00:09:29.092 "is_configured": true, 00:09:29.092 "data_offset": 0, 00:09:29.092 "data_size": 65536 00:09:29.092 } 00:09:29.092 ] 00:09:29.092 }' 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.092 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.664 23:43:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.664 [2024-12-06 23:43:40.950632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.664 "name": "Existed_Raid", 00:09:29.664 "aliases": [ 00:09:29.664 "c1017b0f-bfa6-4820-9d20-b48f8f429e55" 00:09:29.664 ], 00:09:29.664 "product_name": "Raid Volume", 00:09:29.664 "block_size": 512, 00:09:29.664 "num_blocks": 65536, 00:09:29.664 "uuid": "c1017b0f-bfa6-4820-9d20-b48f8f429e55", 00:09:29.664 "assigned_rate_limits": { 00:09:29.664 "rw_ios_per_sec": 0, 00:09:29.664 "rw_mbytes_per_sec": 0, 00:09:29.664 "r_mbytes_per_sec": 0, 00:09:29.664 "w_mbytes_per_sec": 0 00:09:29.664 }, 00:09:29.664 "claimed": false, 00:09:29.664 "zoned": false, 00:09:29.664 "supported_io_types": { 00:09:29.664 "read": true, 00:09:29.664 "write": true, 00:09:29.664 "unmap": false, 00:09:29.664 "flush": false, 00:09:29.664 "reset": true, 00:09:29.664 "nvme_admin": false, 00:09:29.664 "nvme_io": false, 00:09:29.664 "nvme_io_md": false, 00:09:29.664 "write_zeroes": true, 00:09:29.664 "zcopy": false, 00:09:29.664 
"get_zone_info": false, 00:09:29.664 "zone_management": false, 00:09:29.664 "zone_append": false, 00:09:29.664 "compare": false, 00:09:29.664 "compare_and_write": false, 00:09:29.664 "abort": false, 00:09:29.664 "seek_hole": false, 00:09:29.664 "seek_data": false, 00:09:29.664 "copy": false, 00:09:29.664 "nvme_iov_md": false 00:09:29.664 }, 00:09:29.664 "memory_domains": [ 00:09:29.664 { 00:09:29.664 "dma_device_id": "system", 00:09:29.664 "dma_device_type": 1 00:09:29.664 }, 00:09:29.664 { 00:09:29.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.664 "dma_device_type": 2 00:09:29.664 }, 00:09:29.664 { 00:09:29.664 "dma_device_id": "system", 00:09:29.664 "dma_device_type": 1 00:09:29.664 }, 00:09:29.664 { 00:09:29.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.664 "dma_device_type": 2 00:09:29.664 }, 00:09:29.664 { 00:09:29.664 "dma_device_id": "system", 00:09:29.664 "dma_device_type": 1 00:09:29.664 }, 00:09:29.664 { 00:09:29.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.664 "dma_device_type": 2 00:09:29.664 } 00:09:29.664 ], 00:09:29.664 "driver_specific": { 00:09:29.664 "raid": { 00:09:29.664 "uuid": "c1017b0f-bfa6-4820-9d20-b48f8f429e55", 00:09:29.664 "strip_size_kb": 0, 00:09:29.664 "state": "online", 00:09:29.664 "raid_level": "raid1", 00:09:29.664 "superblock": false, 00:09:29.664 "num_base_bdevs": 3, 00:09:29.664 "num_base_bdevs_discovered": 3, 00:09:29.664 "num_base_bdevs_operational": 3, 00:09:29.664 "base_bdevs_list": [ 00:09:29.664 { 00:09:29.664 "name": "NewBaseBdev", 00:09:29.664 "uuid": "ef2eb4f0-a767-4c32-8221-0249580cd5e6", 00:09:29.664 "is_configured": true, 00:09:29.664 "data_offset": 0, 00:09:29.664 "data_size": 65536 00:09:29.664 }, 00:09:29.664 { 00:09:29.664 "name": "BaseBdev2", 00:09:29.664 "uuid": "795859dc-0965-4857-a784-bd855b8bf734", 00:09:29.664 "is_configured": true, 00:09:29.664 "data_offset": 0, 00:09:29.664 "data_size": 65536 00:09:29.664 }, 00:09:29.664 { 00:09:29.664 "name": "BaseBdev3", 00:09:29.664 "uuid": 
"afe4d597-793f-4058-9391-b6888c856afa", 00:09:29.664 "is_configured": true, 00:09:29.664 "data_offset": 0, 00:09:29.664 "data_size": 65536 00:09:29.664 } 00:09:29.664 ] 00:09:29.664 } 00:09:29.664 } 00:09:29.664 }' 00:09:29.664 23:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:29.664 BaseBdev2 00:09:29.664 BaseBdev3' 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.664 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.665 
[2024-12-06 23:43:41.201896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.665 [2024-12-06 23:43:41.201952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.665 [2024-12-06 23:43:41.202065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.665 [2024-12-06 23:43:41.202434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.665 [2024-12-06 23:43:41.202446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67310 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67310 ']' 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67310 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.665 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67310 00:09:29.924 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.924 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.924 killing process with pid 67310 00:09:29.924 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67310' 00:09:29.924 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67310 00:09:29.924 [2024-12-06 
23:43:41.248139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.924 23:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67310 00:09:30.184 [2024-12-06 23:43:41.577226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.565 ************************************ 00:09:31.565 END TEST raid_state_function_test 00:09:31.565 ************************************ 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:31.565 00:09:31.565 real 0m10.768s 00:09:31.565 user 0m16.907s 00:09:31.565 sys 0m1.909s 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.565 23:43:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:31.565 23:43:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.565 23:43:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.565 23:43:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.565 ************************************ 00:09:31.565 START TEST raid_state_function_test_sb 00:09:31.565 ************************************ 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:31.565 23:43:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:31.565 
23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:31.565 Process raid pid: 67934 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67934 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67934' 00:09:31.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67934 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67934 ']' 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.565 23:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.565 [2024-12-06 23:43:42.993059] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:09:31.565 [2024-12-06 23:43:42.993261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.825 [2024-12-06 23:43:43.169960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.825 [2024-12-06 23:43:43.306618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.085 [2024-12-06 23:43:43.549147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.085 [2024-12-06 23:43:43.549301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.346 [2024-12-06 23:43:43.836162] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.346 [2024-12-06 23:43:43.836320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.346 [2024-12-06 23:43:43.836357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.346 [2024-12-06 23:43:43.836382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.346 [2024-12-06 23:43:43.836399] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:32.346 [2024-12-06 23:43:43.836421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.346 "name": "Existed_Raid", 00:09:32.346 "uuid": "2fc6f918-e753-4ce8-a497-c8f56acccb62", 00:09:32.346 "strip_size_kb": 0, 00:09:32.346 "state": "configuring", 00:09:32.346 "raid_level": "raid1", 00:09:32.346 "superblock": true, 00:09:32.346 "num_base_bdevs": 3, 00:09:32.346 "num_base_bdevs_discovered": 0, 00:09:32.346 "num_base_bdevs_operational": 3, 00:09:32.346 "base_bdevs_list": [ 00:09:32.346 { 00:09:32.346 "name": "BaseBdev1", 00:09:32.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.346 "is_configured": false, 00:09:32.346 "data_offset": 0, 00:09:32.346 "data_size": 0 00:09:32.346 }, 00:09:32.346 { 00:09:32.346 "name": "BaseBdev2", 00:09:32.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.346 "is_configured": false, 00:09:32.346 "data_offset": 0, 00:09:32.346 "data_size": 0 00:09:32.346 }, 00:09:32.346 { 00:09:32.346 "name": "BaseBdev3", 00:09:32.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.346 "is_configured": false, 00:09:32.346 "data_offset": 0, 00:09:32.346 "data_size": 0 00:09:32.346 } 00:09:32.346 ] 00:09:32.346 }' 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.346 23:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.916 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.916 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.916 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 [2024-12-06 23:43:44.283410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.917 [2024-12-06 23:43:44.283544] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 [2024-12-06 23:43:44.295367] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.917 [2024-12-06 23:43:44.295418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.917 [2024-12-06 23:43:44.295429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.917 [2024-12-06 23:43:44.295439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.917 [2024-12-06 23:43:44.295445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.917 [2024-12-06 23:43:44.295455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 [2024-12-06 23:43:44.349469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.917 BaseBdev1 
00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 [ 00:09:32.917 { 00:09:32.917 "name": "BaseBdev1", 00:09:32.917 "aliases": [ 00:09:32.917 "32193375-7fdb-46fd-bb00-e788c427e51a" 00:09:32.917 ], 00:09:32.917 "product_name": "Malloc disk", 00:09:32.917 "block_size": 512, 00:09:32.917 "num_blocks": 65536, 00:09:32.917 "uuid": "32193375-7fdb-46fd-bb00-e788c427e51a", 00:09:32.917 "assigned_rate_limits": { 00:09:32.917 
"rw_ios_per_sec": 0, 00:09:32.917 "rw_mbytes_per_sec": 0, 00:09:32.917 "r_mbytes_per_sec": 0, 00:09:32.917 "w_mbytes_per_sec": 0 00:09:32.917 }, 00:09:32.917 "claimed": true, 00:09:32.917 "claim_type": "exclusive_write", 00:09:32.917 "zoned": false, 00:09:32.917 "supported_io_types": { 00:09:32.917 "read": true, 00:09:32.917 "write": true, 00:09:32.917 "unmap": true, 00:09:32.917 "flush": true, 00:09:32.917 "reset": true, 00:09:32.917 "nvme_admin": false, 00:09:32.917 "nvme_io": false, 00:09:32.917 "nvme_io_md": false, 00:09:32.917 "write_zeroes": true, 00:09:32.917 "zcopy": true, 00:09:32.917 "get_zone_info": false, 00:09:32.917 "zone_management": false, 00:09:32.917 "zone_append": false, 00:09:32.917 "compare": false, 00:09:32.917 "compare_and_write": false, 00:09:32.917 "abort": true, 00:09:32.917 "seek_hole": false, 00:09:32.917 "seek_data": false, 00:09:32.917 "copy": true, 00:09:32.917 "nvme_iov_md": false 00:09:32.917 }, 00:09:32.917 "memory_domains": [ 00:09:32.917 { 00:09:32.917 "dma_device_id": "system", 00:09:32.917 "dma_device_type": 1 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.917 "dma_device_type": 2 00:09:32.917 } 00:09:32.917 ], 00:09:32.917 "driver_specific": {} 00:09:32.917 } 00:09:32.917 ] 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.917 "name": "Existed_Raid", 00:09:32.917 "uuid": "e518d8be-633d-492e-a8b5-9ac7fea20d2b", 00:09:32.917 "strip_size_kb": 0, 00:09:32.917 "state": "configuring", 00:09:32.917 "raid_level": "raid1", 00:09:32.917 "superblock": true, 00:09:32.917 "num_base_bdevs": 3, 00:09:32.917 "num_base_bdevs_discovered": 1, 00:09:32.917 "num_base_bdevs_operational": 3, 00:09:32.917 "base_bdevs_list": [ 00:09:32.917 { 00:09:32.917 "name": "BaseBdev1", 00:09:32.917 "uuid": "32193375-7fdb-46fd-bb00-e788c427e51a", 00:09:32.917 "is_configured": true, 00:09:32.917 "data_offset": 2048, 00:09:32.917 "data_size": 63488 
00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "name": "BaseBdev2", 00:09:32.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.917 "is_configured": false, 00:09:32.917 "data_offset": 0, 00:09:32.917 "data_size": 0 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "name": "BaseBdev3", 00:09:32.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.917 "is_configured": false, 00:09:32.917 "data_offset": 0, 00:09:32.917 "data_size": 0 00:09:32.917 } 00:09:32.917 ] 00:09:32.917 }' 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.917 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.491 [2024-12-06 23:43:44.832737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.491 [2024-12-06 23:43:44.832901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.491 [2024-12-06 23:43:44.840740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.491 [2024-12-06 23:43:44.842894] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.491 [2024-12-06 23:43:44.842973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.491 [2024-12-06 23:43:44.843007] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.491 [2024-12-06 23:43:44.843030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:33.491 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.492 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.492 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.492 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.492 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.492 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.492 "name": "Existed_Raid", 00:09:33.492 "uuid": "dd0dc3a7-0c5b-4ad7-b801-84252abd5896", 00:09:33.492 "strip_size_kb": 0, 00:09:33.492 "state": "configuring", 00:09:33.492 "raid_level": "raid1", 00:09:33.492 "superblock": true, 00:09:33.492 "num_base_bdevs": 3, 00:09:33.492 "num_base_bdevs_discovered": 1, 00:09:33.492 "num_base_bdevs_operational": 3, 00:09:33.492 "base_bdevs_list": [ 00:09:33.492 { 00:09:33.492 "name": "BaseBdev1", 00:09:33.492 "uuid": "32193375-7fdb-46fd-bb00-e788c427e51a", 00:09:33.492 "is_configured": true, 00:09:33.492 "data_offset": 2048, 00:09:33.492 "data_size": 63488 00:09:33.492 }, 00:09:33.492 { 00:09:33.492 "name": "BaseBdev2", 00:09:33.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.492 "is_configured": false, 00:09:33.492 "data_offset": 0, 00:09:33.492 "data_size": 0 00:09:33.492 }, 00:09:33.492 { 00:09:33.492 "name": "BaseBdev3", 00:09:33.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.492 "is_configured": false, 00:09:33.492 "data_offset": 0, 00:09:33.492 "data_size": 0 00:09:33.492 } 00:09:33.492 ] 00:09:33.492 }' 00:09:33.492 23:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.492 23:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:33.753 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.753 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.753 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.753 [2024-12-06 23:43:45.312520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.012 BaseBdev2 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.012 [ 00:09:34.012 { 00:09:34.012 "name": "BaseBdev2", 00:09:34.012 "aliases": [ 00:09:34.012 "1faa080a-c913-4bed-aefe-58a190b29017" 00:09:34.012 ], 00:09:34.012 "product_name": "Malloc disk", 00:09:34.012 "block_size": 512, 00:09:34.012 "num_blocks": 65536, 00:09:34.012 "uuid": "1faa080a-c913-4bed-aefe-58a190b29017", 00:09:34.012 "assigned_rate_limits": { 00:09:34.012 "rw_ios_per_sec": 0, 00:09:34.012 "rw_mbytes_per_sec": 0, 00:09:34.012 "r_mbytes_per_sec": 0, 00:09:34.012 "w_mbytes_per_sec": 0 00:09:34.012 }, 00:09:34.012 "claimed": true, 00:09:34.012 "claim_type": "exclusive_write", 00:09:34.012 "zoned": false, 00:09:34.012 "supported_io_types": { 00:09:34.012 "read": true, 00:09:34.012 "write": true, 00:09:34.012 "unmap": true, 00:09:34.012 "flush": true, 00:09:34.012 "reset": true, 00:09:34.012 "nvme_admin": false, 00:09:34.012 "nvme_io": false, 00:09:34.012 "nvme_io_md": false, 00:09:34.012 "write_zeroes": true, 00:09:34.012 "zcopy": true, 00:09:34.012 "get_zone_info": false, 00:09:34.012 "zone_management": false, 00:09:34.012 "zone_append": false, 00:09:34.012 "compare": false, 00:09:34.012 "compare_and_write": false, 00:09:34.012 "abort": true, 00:09:34.012 "seek_hole": false, 00:09:34.012 "seek_data": false, 00:09:34.012 "copy": true, 00:09:34.012 "nvme_iov_md": false 00:09:34.012 }, 00:09:34.012 "memory_domains": [ 00:09:34.012 { 00:09:34.012 "dma_device_id": "system", 00:09:34.012 "dma_device_type": 1 00:09:34.012 }, 00:09:34.012 { 00:09:34.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.012 "dma_device_type": 2 00:09:34.012 } 00:09:34.012 ], 00:09:34.012 "driver_specific": {} 00:09:34.012 } 00:09:34.012 ] 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
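The `waitforbdev BaseBdev2` sequence traced above (a `bdev_timeout` defaulting to 2000, then `rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000`, then `return 0`) can be sketched as a standalone function. This is a simplified, hedged reconstruction: `rpc_cmd` is stubbed to answer immediately, whereas the real helper in `autotest_common.sh` wraps `scripts/rpc.py` against a live SPDK target, and the exact retry behavior there may differ.

```shell
# Stub standing in for the autotest rpc_cmd wrapper; the real one
# forwards to scripts/rpc.py on a running SPDK app. Here it just
# pretends the requested bdev exists ($3 is the name after "-b").
rpc_cmd() {
    echo "[{\"name\": \"$3\"}]"
}

# Hypothetical simplified waitforbdev: poll bdev_get_bdevs with a
# timeout (default 2000 ms, as seen in the trace) until the bdev
# shows up, retrying a few times before giving up.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}
    local i
    for ((i = 0; i < 3; i++)); do
        rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null && return 0
        sleep 0.1
    done
    return 1
}

waitforbdev BaseBdev2 && echo "BaseBdev2 ready"
```

With the stub in place, the call returns 0 on the first poll; against a real target, the `-t` value gives the RPC itself up to 2000 ms to observe the bdev.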
00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.012 
23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.012 "name": "Existed_Raid", 00:09:34.012 "uuid": "dd0dc3a7-0c5b-4ad7-b801-84252abd5896", 00:09:34.012 "strip_size_kb": 0, 00:09:34.012 "state": "configuring", 00:09:34.012 "raid_level": "raid1", 00:09:34.012 "superblock": true, 00:09:34.012 "num_base_bdevs": 3, 00:09:34.012 "num_base_bdevs_discovered": 2, 00:09:34.012 "num_base_bdevs_operational": 3, 00:09:34.012 "base_bdevs_list": [ 00:09:34.012 { 00:09:34.012 "name": "BaseBdev1", 00:09:34.012 "uuid": "32193375-7fdb-46fd-bb00-e788c427e51a", 00:09:34.012 "is_configured": true, 00:09:34.012 "data_offset": 2048, 00:09:34.012 "data_size": 63488 00:09:34.012 }, 00:09:34.012 { 00:09:34.012 "name": "BaseBdev2", 00:09:34.012 "uuid": "1faa080a-c913-4bed-aefe-58a190b29017", 00:09:34.012 "is_configured": true, 00:09:34.012 "data_offset": 2048, 00:09:34.012 "data_size": 63488 00:09:34.012 }, 00:09:34.012 { 00:09:34.012 "name": "BaseBdev3", 00:09:34.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.012 "is_configured": false, 00:09:34.012 "data_offset": 0, 00:09:34.012 "data_size": 0 00:09:34.012 } 00:09:34.012 ] 00:09:34.012 }' 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.012 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.272 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.272 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.272 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.531 [2024-12-06 23:43:45.843336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.531 [2024-12-06 23:43:45.843762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:34.531 [2024-12-06 23:43:45.843828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:34.531 [2024-12-06 23:43:45.844171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:34.531 BaseBdev3 00:09:34.531 [2024-12-06 23:43:45.844382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:34.531 [2024-12-06 23:43:45.844401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:34.531 [2024-12-06 23:43:45.844567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.531 23:43:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.531 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.531 [ 00:09:34.531 { 00:09:34.531 "name": "BaseBdev3", 00:09:34.531 "aliases": [ 00:09:34.531 "d7281b8c-8510-4cf8-a931-e3ede95b801a" 00:09:34.531 ], 00:09:34.531 "product_name": "Malloc disk", 00:09:34.531 "block_size": 512, 00:09:34.531 "num_blocks": 65536, 00:09:34.531 "uuid": "d7281b8c-8510-4cf8-a931-e3ede95b801a", 00:09:34.531 "assigned_rate_limits": { 00:09:34.531 "rw_ios_per_sec": 0, 00:09:34.531 "rw_mbytes_per_sec": 0, 00:09:34.531 "r_mbytes_per_sec": 0, 00:09:34.531 "w_mbytes_per_sec": 0 00:09:34.531 }, 00:09:34.531 "claimed": true, 00:09:34.531 "claim_type": "exclusive_write", 00:09:34.531 "zoned": false, 00:09:34.531 "supported_io_types": { 00:09:34.531 "read": true, 00:09:34.531 "write": true, 00:09:34.531 "unmap": true, 00:09:34.531 "flush": true, 00:09:34.531 "reset": true, 00:09:34.531 "nvme_admin": false, 00:09:34.531 "nvme_io": false, 00:09:34.531 "nvme_io_md": false, 00:09:34.531 "write_zeroes": true, 00:09:34.531 "zcopy": true, 00:09:34.531 "get_zone_info": false, 00:09:34.531 "zone_management": false, 00:09:34.531 "zone_append": false, 00:09:34.531 "compare": false, 00:09:34.531 "compare_and_write": false, 00:09:34.531 "abort": true, 00:09:34.531 "seek_hole": false, 00:09:34.531 "seek_data": false, 00:09:34.531 "copy": true, 00:09:34.531 "nvme_iov_md": false 00:09:34.531 }, 00:09:34.531 "memory_domains": [ 00:09:34.532 { 00:09:34.532 "dma_device_id": "system", 00:09:34.532 "dma_device_type": 1 00:09:34.532 }, 00:09:34.532 { 00:09:34.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.532 "dma_device_type": 2 00:09:34.532 } 00:09:34.532 ], 00:09:34.532 "driver_specific": {} 00:09:34.532 } 00:09:34.532 ] 
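The repeated `verify_raid_bdev_state Existed_Raid online raid1 0 3` checks in the trace fetch the raid bdev info and compare its `state`, `raid_level`, and base-bdev counts against expectations. The sketch below reproduces that pattern over a captured JSON blob; it is a hedged stand-in only — the real helper in `bdev_raid.sh` pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == ...)'`, while this version greps a shell variable so it runs without SPDK or jq.

```shell
# Captured (abridged) bdev_raid_get_bdevs output, as seen in the log
# once all three base bdevs are claimed and the array goes online.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

# Crude one-field extractor; the real suite uses jq instead.
get_field() {
    printf '%s\n' "$raid_bdev_info" | grep "\"$1\":" | sed 's/[^:]*: *//; s/[",]//g'
}

# Hypothetical simplified verify_raid_bdev_state: succeed only when
# the observed state and discovered-bdev count match expectations.
verify_raid_bdev_state() {
    local expected_state=$1
    local expected_discovered=$2
    [ "$(get_field state)" = "$expected_state" ] &&
        [ "$(get_field num_base_bdevs_discovered)" = "$expected_discovered" ]
}

verify_raid_bdev_state online 3 && echo "Existed_Raid verified"
```

Run against the blob above, the check passes; in the trace the same comparison is what distinguishes the `configuring` snapshots (1 or 2 bdevs discovered) from the final `online` one (all 3 discovered).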
00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.532 
23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.532 "name": "Existed_Raid", 00:09:34.532 "uuid": "dd0dc3a7-0c5b-4ad7-b801-84252abd5896", 00:09:34.532 "strip_size_kb": 0, 00:09:34.532 "state": "online", 00:09:34.532 "raid_level": "raid1", 00:09:34.532 "superblock": true, 00:09:34.532 "num_base_bdevs": 3, 00:09:34.532 "num_base_bdevs_discovered": 3, 00:09:34.532 "num_base_bdevs_operational": 3, 00:09:34.532 "base_bdevs_list": [ 00:09:34.532 { 00:09:34.532 "name": "BaseBdev1", 00:09:34.532 "uuid": "32193375-7fdb-46fd-bb00-e788c427e51a", 00:09:34.532 "is_configured": true, 00:09:34.532 "data_offset": 2048, 00:09:34.532 "data_size": 63488 00:09:34.532 }, 00:09:34.532 { 00:09:34.532 "name": "BaseBdev2", 00:09:34.532 "uuid": "1faa080a-c913-4bed-aefe-58a190b29017", 00:09:34.532 "is_configured": true, 00:09:34.532 "data_offset": 2048, 00:09:34.532 "data_size": 63488 00:09:34.532 }, 00:09:34.532 { 00:09:34.532 "name": "BaseBdev3", 00:09:34.532 "uuid": "d7281b8c-8510-4cf8-a931-e3ede95b801a", 00:09:34.532 "is_configured": true, 00:09:34.532 "data_offset": 2048, 00:09:34.532 "data_size": 63488 00:09:34.532 } 00:09:34.532 ] 00:09:34.532 }' 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.532 23:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.792 [2024-12-06 23:43:46.283019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.792 "name": "Existed_Raid", 00:09:34.792 "aliases": [ 00:09:34.792 "dd0dc3a7-0c5b-4ad7-b801-84252abd5896" 00:09:34.792 ], 00:09:34.792 "product_name": "Raid Volume", 00:09:34.792 "block_size": 512, 00:09:34.792 "num_blocks": 63488, 00:09:34.792 "uuid": "dd0dc3a7-0c5b-4ad7-b801-84252abd5896", 00:09:34.792 "assigned_rate_limits": { 00:09:34.792 "rw_ios_per_sec": 0, 00:09:34.792 "rw_mbytes_per_sec": 0, 00:09:34.792 "r_mbytes_per_sec": 0, 00:09:34.792 "w_mbytes_per_sec": 0 00:09:34.792 }, 00:09:34.792 "claimed": false, 00:09:34.792 "zoned": false, 00:09:34.792 "supported_io_types": { 00:09:34.792 "read": true, 00:09:34.792 "write": true, 00:09:34.792 "unmap": false, 00:09:34.792 "flush": false, 00:09:34.792 "reset": true, 00:09:34.792 "nvme_admin": false, 00:09:34.792 "nvme_io": false, 00:09:34.792 "nvme_io_md": false, 00:09:34.792 "write_zeroes": true, 
00:09:34.792 "zcopy": false, 00:09:34.792 "get_zone_info": false, 00:09:34.792 "zone_management": false, 00:09:34.792 "zone_append": false, 00:09:34.792 "compare": false, 00:09:34.792 "compare_and_write": false, 00:09:34.792 "abort": false, 00:09:34.792 "seek_hole": false, 00:09:34.792 "seek_data": false, 00:09:34.792 "copy": false, 00:09:34.792 "nvme_iov_md": false 00:09:34.792 }, 00:09:34.792 "memory_domains": [ 00:09:34.792 { 00:09:34.792 "dma_device_id": "system", 00:09:34.792 "dma_device_type": 1 00:09:34.792 }, 00:09:34.792 { 00:09:34.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.792 "dma_device_type": 2 00:09:34.792 }, 00:09:34.792 { 00:09:34.792 "dma_device_id": "system", 00:09:34.792 "dma_device_type": 1 00:09:34.792 }, 00:09:34.792 { 00:09:34.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.792 "dma_device_type": 2 00:09:34.792 }, 00:09:34.792 { 00:09:34.792 "dma_device_id": "system", 00:09:34.792 "dma_device_type": 1 00:09:34.792 }, 00:09:34.792 { 00:09:34.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.792 "dma_device_type": 2 00:09:34.792 } 00:09:34.792 ], 00:09:34.792 "driver_specific": { 00:09:34.792 "raid": { 00:09:34.792 "uuid": "dd0dc3a7-0c5b-4ad7-b801-84252abd5896", 00:09:34.792 "strip_size_kb": 0, 00:09:34.792 "state": "online", 00:09:34.792 "raid_level": "raid1", 00:09:34.792 "superblock": true, 00:09:34.792 "num_base_bdevs": 3, 00:09:34.792 "num_base_bdevs_discovered": 3, 00:09:34.792 "num_base_bdevs_operational": 3, 00:09:34.792 "base_bdevs_list": [ 00:09:34.792 { 00:09:34.792 "name": "BaseBdev1", 00:09:34.792 "uuid": "32193375-7fdb-46fd-bb00-e788c427e51a", 00:09:34.792 "is_configured": true, 00:09:34.792 "data_offset": 2048, 00:09:34.792 "data_size": 63488 00:09:34.792 }, 00:09:34.792 { 00:09:34.792 "name": "BaseBdev2", 00:09:34.792 "uuid": "1faa080a-c913-4bed-aefe-58a190b29017", 00:09:34.792 "is_configured": true, 00:09:34.792 "data_offset": 2048, 00:09:34.792 "data_size": 63488 00:09:34.792 }, 00:09:34.792 { 
00:09:34.792 "name": "BaseBdev3", 00:09:34.792 "uuid": "d7281b8c-8510-4cf8-a931-e3ede95b801a", 00:09:34.792 "is_configured": true, 00:09:34.792 "data_offset": 2048, 00:09:34.792 "data_size": 63488 00:09:34.792 } 00:09:34.792 ] 00:09:34.792 } 00:09:34.792 } 00:09:34.792 }' 00:09:34.792 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.053 BaseBdev2 00:09:35.053 BaseBdev3' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.053 23:43:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.053 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.053 [2024-12-06 23:43:46.582156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.313 
23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.313 "name": "Existed_Raid", 00:09:35.313 "uuid": "dd0dc3a7-0c5b-4ad7-b801-84252abd5896", 00:09:35.313 "strip_size_kb": 0, 00:09:35.313 "state": "online", 00:09:35.313 "raid_level": "raid1", 00:09:35.313 "superblock": true, 00:09:35.313 "num_base_bdevs": 3, 00:09:35.313 "num_base_bdevs_discovered": 2, 00:09:35.313 "num_base_bdevs_operational": 2, 00:09:35.313 "base_bdevs_list": [ 00:09:35.313 { 00:09:35.313 "name": null, 00:09:35.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.313 "is_configured": false, 00:09:35.313 "data_offset": 0, 00:09:35.313 "data_size": 63488 00:09:35.313 }, 00:09:35.313 { 00:09:35.313 "name": "BaseBdev2", 00:09:35.313 "uuid": "1faa080a-c913-4bed-aefe-58a190b29017", 00:09:35.313 "is_configured": true, 00:09:35.313 "data_offset": 2048, 00:09:35.313 "data_size": 63488 00:09:35.313 }, 00:09:35.313 { 00:09:35.313 "name": "BaseBdev3", 00:09:35.313 "uuid": "d7281b8c-8510-4cf8-a931-e3ede95b801a", 00:09:35.313 "is_configured": true, 00:09:35.313 "data_offset": 2048, 00:09:35.313 "data_size": 63488 00:09:35.313 } 00:09:35.313 ] 00:09:35.313 }' 00:09:35.313 23:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.313 
23:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.573 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.573 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.573 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.573 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.843 [2024-12-06 23:43:47.188448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:35.843 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.844 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.844 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.844 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.844 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.844 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.844 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:35.844 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.844 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.844 [2024-12-06 23:43:47.352709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.844 [2024-12-06 23:43:47.352843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.121 [2024-12-06 23:43:47.460404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.121 [2024-12-06 23:43:47.460472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.121 [2024-12-06 23:43:47.460486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.121 BaseBdev2 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.121 [ 00:09:36.121 { 00:09:36.121 "name": "BaseBdev2", 00:09:36.121 "aliases": [ 00:09:36.121 "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3" 00:09:36.121 ], 00:09:36.121 "product_name": "Malloc disk", 00:09:36.121 "block_size": 512, 00:09:36.121 "num_blocks": 65536, 00:09:36.121 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:36.121 "assigned_rate_limits": { 00:09:36.121 "rw_ios_per_sec": 0, 00:09:36.121 "rw_mbytes_per_sec": 0, 00:09:36.121 "r_mbytes_per_sec": 0, 00:09:36.121 "w_mbytes_per_sec": 0 00:09:36.121 }, 00:09:36.121 "claimed": false, 00:09:36.121 "zoned": false, 00:09:36.121 "supported_io_types": { 00:09:36.121 "read": true, 00:09:36.121 "write": true, 00:09:36.121 "unmap": true, 00:09:36.121 "flush": true, 00:09:36.121 "reset": true, 00:09:36.121 "nvme_admin": false, 00:09:36.121 "nvme_io": false, 00:09:36.121 
"nvme_io_md": false, 00:09:36.121 "write_zeroes": true, 00:09:36.121 "zcopy": true, 00:09:36.121 "get_zone_info": false, 00:09:36.121 "zone_management": false, 00:09:36.121 "zone_append": false, 00:09:36.121 "compare": false, 00:09:36.121 "compare_and_write": false, 00:09:36.121 "abort": true, 00:09:36.121 "seek_hole": false, 00:09:36.121 "seek_data": false, 00:09:36.121 "copy": true, 00:09:36.121 "nvme_iov_md": false 00:09:36.121 }, 00:09:36.121 "memory_domains": [ 00:09:36.121 { 00:09:36.121 "dma_device_id": "system", 00:09:36.121 "dma_device_type": 1 00:09:36.121 }, 00:09:36.121 { 00:09:36.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.121 "dma_device_type": 2 00:09:36.121 } 00:09:36.121 ], 00:09:36.121 "driver_specific": {} 00:09:36.121 } 00:09:36.121 ] 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.121 BaseBdev3 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:36.121 23:43:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.122 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.122 [ 00:09:36.122 { 00:09:36.122 "name": "BaseBdev3", 00:09:36.122 "aliases": [ 00:09:36.122 "7b1a7357-fbf5-4202-a2ad-bd5449e7248e" 00:09:36.122 ], 00:09:36.122 "product_name": "Malloc disk", 00:09:36.122 "block_size": 512, 00:09:36.122 "num_blocks": 65536, 00:09:36.122 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:36.122 "assigned_rate_limits": { 00:09:36.122 "rw_ios_per_sec": 0, 00:09:36.122 "rw_mbytes_per_sec": 0, 00:09:36.122 "r_mbytes_per_sec": 0, 00:09:36.122 "w_mbytes_per_sec": 0 00:09:36.122 }, 00:09:36.122 "claimed": false, 00:09:36.122 "zoned": false, 00:09:36.122 "supported_io_types": { 00:09:36.122 "read": true, 00:09:36.122 "write": true, 00:09:36.122 "unmap": true, 00:09:36.122 "flush": true, 00:09:36.122 "reset": true, 00:09:36.122 "nvme_admin": false, 
00:09:36.122 "nvme_io": false, 00:09:36.122 "nvme_io_md": false, 00:09:36.122 "write_zeroes": true, 00:09:36.122 "zcopy": true, 00:09:36.122 "get_zone_info": false, 00:09:36.122 "zone_management": false, 00:09:36.122 "zone_append": false, 00:09:36.122 "compare": false, 00:09:36.122 "compare_and_write": false, 00:09:36.122 "abort": true, 00:09:36.382 "seek_hole": false, 00:09:36.382 "seek_data": false, 00:09:36.382 "copy": true, 00:09:36.382 "nvme_iov_md": false 00:09:36.382 }, 00:09:36.382 "memory_domains": [ 00:09:36.382 { 00:09:36.382 "dma_device_id": "system", 00:09:36.382 "dma_device_type": 1 00:09:36.382 }, 00:09:36.382 { 00:09:36.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.382 "dma_device_type": 2 00:09:36.382 } 00:09:36.382 ], 00:09:36.382 "driver_specific": {} 00:09:36.382 } 00:09:36.382 ] 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.382 [2024-12-06 23:43:47.693232] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.382 [2024-12-06 23:43:47.693378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.382 [2024-12-06 23:43:47.693422] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.382 [2024-12-06 23:43:47.695553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.382 
23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.382 "name": "Existed_Raid", 00:09:36.382 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:36.382 "strip_size_kb": 0, 00:09:36.382 "state": "configuring", 00:09:36.382 "raid_level": "raid1", 00:09:36.382 "superblock": true, 00:09:36.382 "num_base_bdevs": 3, 00:09:36.382 "num_base_bdevs_discovered": 2, 00:09:36.382 "num_base_bdevs_operational": 3, 00:09:36.382 "base_bdevs_list": [ 00:09:36.382 { 00:09:36.382 "name": "BaseBdev1", 00:09:36.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.382 "is_configured": false, 00:09:36.382 "data_offset": 0, 00:09:36.382 "data_size": 0 00:09:36.382 }, 00:09:36.382 { 00:09:36.382 "name": "BaseBdev2", 00:09:36.382 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:36.382 "is_configured": true, 00:09:36.382 "data_offset": 2048, 00:09:36.382 "data_size": 63488 00:09:36.382 }, 00:09:36.382 { 00:09:36.382 "name": "BaseBdev3", 00:09:36.382 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:36.382 "is_configured": true, 00:09:36.382 "data_offset": 2048, 00:09:36.382 "data_size": 63488 00:09:36.382 } 00:09:36.382 ] 00:09:36.382 }' 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.382 23:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.643 [2024-12-06 23:43:48.180430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.643 23:43:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.643 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.903 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.903 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.903 "name": 
"Existed_Raid", 00:09:36.903 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:36.903 "strip_size_kb": 0, 00:09:36.903 "state": "configuring", 00:09:36.903 "raid_level": "raid1", 00:09:36.903 "superblock": true, 00:09:36.903 "num_base_bdevs": 3, 00:09:36.903 "num_base_bdevs_discovered": 1, 00:09:36.903 "num_base_bdevs_operational": 3, 00:09:36.903 "base_bdevs_list": [ 00:09:36.903 { 00:09:36.903 "name": "BaseBdev1", 00:09:36.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.903 "is_configured": false, 00:09:36.903 "data_offset": 0, 00:09:36.903 "data_size": 0 00:09:36.903 }, 00:09:36.903 { 00:09:36.903 "name": null, 00:09:36.903 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:36.903 "is_configured": false, 00:09:36.903 "data_offset": 0, 00:09:36.903 "data_size": 63488 00:09:36.903 }, 00:09:36.903 { 00:09:36.903 "name": "BaseBdev3", 00:09:36.903 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:36.903 "is_configured": true, 00:09:36.903 "data_offset": 2048, 00:09:36.903 "data_size": 63488 00:09:36.903 } 00:09:36.903 ] 00:09:36.903 }' 00:09:36.903 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.903 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.163 
23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.163 [2024-12-06 23:43:48.662532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.163 BaseBdev1 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.163 [ 00:09:37.163 { 00:09:37.163 "name": "BaseBdev1", 00:09:37.163 "aliases": [ 00:09:37.163 "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9" 00:09:37.163 ], 00:09:37.163 "product_name": "Malloc disk", 00:09:37.163 "block_size": 512, 00:09:37.163 "num_blocks": 65536, 00:09:37.163 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:37.163 "assigned_rate_limits": { 00:09:37.163 "rw_ios_per_sec": 0, 00:09:37.163 "rw_mbytes_per_sec": 0, 00:09:37.163 "r_mbytes_per_sec": 0, 00:09:37.163 "w_mbytes_per_sec": 0 00:09:37.163 }, 00:09:37.163 "claimed": true, 00:09:37.163 "claim_type": "exclusive_write", 00:09:37.163 "zoned": false, 00:09:37.163 "supported_io_types": { 00:09:37.163 "read": true, 00:09:37.163 "write": true, 00:09:37.163 "unmap": true, 00:09:37.163 "flush": true, 00:09:37.163 "reset": true, 00:09:37.163 "nvme_admin": false, 00:09:37.163 "nvme_io": false, 00:09:37.163 "nvme_io_md": false, 00:09:37.163 "write_zeroes": true, 00:09:37.163 "zcopy": true, 00:09:37.163 "get_zone_info": false, 00:09:37.163 "zone_management": false, 00:09:37.163 "zone_append": false, 00:09:37.163 "compare": false, 00:09:37.163 "compare_and_write": false, 00:09:37.163 "abort": true, 00:09:37.163 "seek_hole": false, 00:09:37.163 "seek_data": false, 00:09:37.163 "copy": true, 00:09:37.163 "nvme_iov_md": false 00:09:37.163 }, 00:09:37.163 "memory_domains": [ 00:09:37.163 { 00:09:37.163 "dma_device_id": "system", 00:09:37.163 "dma_device_type": 1 00:09:37.163 }, 00:09:37.163 { 00:09:37.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.163 "dma_device_type": 2 00:09:37.163 } 00:09:37.163 ], 00:09:37.163 "driver_specific": {} 00:09:37.163 } 00:09:37.163 ] 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.163 
23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.163 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.164 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.164 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.164 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.423 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.423 "name": "Existed_Raid", 00:09:37.423 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:37.423 "strip_size_kb": 0, 
00:09:37.423 "state": "configuring", 00:09:37.423 "raid_level": "raid1", 00:09:37.423 "superblock": true, 00:09:37.423 "num_base_bdevs": 3, 00:09:37.423 "num_base_bdevs_discovered": 2, 00:09:37.423 "num_base_bdevs_operational": 3, 00:09:37.423 "base_bdevs_list": [ 00:09:37.423 { 00:09:37.423 "name": "BaseBdev1", 00:09:37.423 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:37.423 "is_configured": true, 00:09:37.423 "data_offset": 2048, 00:09:37.423 "data_size": 63488 00:09:37.423 }, 00:09:37.423 { 00:09:37.423 "name": null, 00:09:37.423 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:37.423 "is_configured": false, 00:09:37.423 "data_offset": 0, 00:09:37.423 "data_size": 63488 00:09:37.423 }, 00:09:37.423 { 00:09:37.423 "name": "BaseBdev3", 00:09:37.423 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:37.423 "is_configured": true, 00:09:37.423 "data_offset": 2048, 00:09:37.423 "data_size": 63488 00:09:37.423 } 00:09:37.423 ] 00:09:37.423 }' 00:09:37.423 23:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.423 23:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.683 [2024-12-06 23:43:49.177714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.683 "name": "Existed_Raid", 00:09:37.683 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:37.683 "strip_size_kb": 0, 00:09:37.683 "state": "configuring", 00:09:37.683 "raid_level": "raid1", 00:09:37.683 "superblock": true, 00:09:37.683 "num_base_bdevs": 3, 00:09:37.683 "num_base_bdevs_discovered": 1, 00:09:37.683 "num_base_bdevs_operational": 3, 00:09:37.683 "base_bdevs_list": [ 00:09:37.683 { 00:09:37.683 "name": "BaseBdev1", 00:09:37.683 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:37.683 "is_configured": true, 00:09:37.683 "data_offset": 2048, 00:09:37.683 "data_size": 63488 00:09:37.683 }, 00:09:37.683 { 00:09:37.683 "name": null, 00:09:37.683 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:37.683 "is_configured": false, 00:09:37.683 "data_offset": 0, 00:09:37.683 "data_size": 63488 00:09:37.683 }, 00:09:37.683 { 00:09:37.683 "name": null, 00:09:37.683 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:37.683 "is_configured": false, 00:09:37.683 "data_offset": 0, 00:09:37.683 "data_size": 63488 00:09:37.683 } 00:09:37.683 ] 00:09:37.683 }' 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.683 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.254 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.254 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.254 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:38.254 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.254 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.255 [2024-12-06 23:43:49.648946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.255 "name": "Existed_Raid", 00:09:38.255 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:38.255 "strip_size_kb": 0, 00:09:38.255 "state": "configuring", 00:09:38.255 "raid_level": "raid1", 00:09:38.255 "superblock": true, 00:09:38.255 "num_base_bdevs": 3, 00:09:38.255 "num_base_bdevs_discovered": 2, 00:09:38.255 "num_base_bdevs_operational": 3, 00:09:38.255 "base_bdevs_list": [ 00:09:38.255 { 00:09:38.255 "name": "BaseBdev1", 00:09:38.255 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:38.255 "is_configured": true, 00:09:38.255 "data_offset": 2048, 00:09:38.255 "data_size": 63488 00:09:38.255 }, 00:09:38.255 { 00:09:38.255 "name": null, 00:09:38.255 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:38.255 "is_configured": false, 00:09:38.255 "data_offset": 0, 00:09:38.255 "data_size": 63488 00:09:38.255 }, 00:09:38.255 { 00:09:38.255 "name": "BaseBdev3", 00:09:38.255 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:38.255 "is_configured": true, 00:09:38.255 "data_offset": 2048, 00:09:38.255 "data_size": 63488 00:09:38.255 } 00:09:38.255 ] 00:09:38.255 }' 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.255 23:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.825 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.825 [2024-12-06 23:43:50.136118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.826 "name": "Existed_Raid", 00:09:38.826 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:38.826 "strip_size_kb": 0, 00:09:38.826 "state": "configuring", 00:09:38.826 "raid_level": "raid1", 00:09:38.826 "superblock": true, 00:09:38.826 "num_base_bdevs": 3, 00:09:38.826 "num_base_bdevs_discovered": 1, 00:09:38.826 "num_base_bdevs_operational": 3, 00:09:38.826 "base_bdevs_list": [ 00:09:38.826 { 00:09:38.826 "name": null, 00:09:38.826 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:38.826 "is_configured": false, 00:09:38.826 "data_offset": 0, 00:09:38.826 "data_size": 63488 00:09:38.826 }, 00:09:38.826 { 00:09:38.826 "name": null, 00:09:38.826 "uuid": 
"cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:38.826 "is_configured": false, 00:09:38.826 "data_offset": 0, 00:09:38.826 "data_size": 63488 00:09:38.826 }, 00:09:38.826 { 00:09:38.826 "name": "BaseBdev3", 00:09:38.826 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:38.826 "is_configured": true, 00:09:38.826 "data_offset": 2048, 00:09:38.826 "data_size": 63488 00:09:38.826 } 00:09:38.826 ] 00:09:38.826 }' 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.826 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.396 [2024-12-06 23:43:50.747729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.396 23:43:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.397 "name": "Existed_Raid", 00:09:39.397 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:39.397 "strip_size_kb": 0, 00:09:39.397 "state": "configuring", 00:09:39.397 
"raid_level": "raid1", 00:09:39.397 "superblock": true, 00:09:39.397 "num_base_bdevs": 3, 00:09:39.397 "num_base_bdevs_discovered": 2, 00:09:39.397 "num_base_bdevs_operational": 3, 00:09:39.397 "base_bdevs_list": [ 00:09:39.397 { 00:09:39.397 "name": null, 00:09:39.397 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:39.397 "is_configured": false, 00:09:39.397 "data_offset": 0, 00:09:39.397 "data_size": 63488 00:09:39.397 }, 00:09:39.397 { 00:09:39.397 "name": "BaseBdev2", 00:09:39.397 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:39.397 "is_configured": true, 00:09:39.397 "data_offset": 2048, 00:09:39.397 "data_size": 63488 00:09:39.397 }, 00:09:39.397 { 00:09:39.397 "name": "BaseBdev3", 00:09:39.397 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:39.397 "is_configured": true, 00:09:39.397 "data_offset": 2048, 00:09:39.397 "data_size": 63488 00:09:39.397 } 00:09:39.397 ] 00:09:39.397 }' 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.397 23:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.657 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.657 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.657 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.657 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.657 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.917 23:43:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.917 [2024-12-06 23:43:51.324346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.917 [2024-12-06 23:43:51.324593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:39.917 [2024-12-06 23:43:51.324607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:39.917 [2024-12-06 23:43:51.324897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:39.917 [2024-12-06 23:43:51.325073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:39.917 [2024-12-06 23:43:51.325086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:39.917 [2024-12-06 23:43:51.325226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.917 NewBaseBdev 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.917 
23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.917 [ 00:09:39.917 { 00:09:39.917 "name": "NewBaseBdev", 00:09:39.917 "aliases": [ 00:09:39.917 "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9" 00:09:39.917 ], 00:09:39.917 "product_name": "Malloc disk", 00:09:39.917 "block_size": 512, 00:09:39.917 "num_blocks": 65536, 00:09:39.917 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:39.917 "assigned_rate_limits": { 00:09:39.917 "rw_ios_per_sec": 0, 00:09:39.917 "rw_mbytes_per_sec": 0, 00:09:39.917 "r_mbytes_per_sec": 0, 00:09:39.917 "w_mbytes_per_sec": 0 00:09:39.917 }, 00:09:39.917 "claimed": true, 00:09:39.917 "claim_type": "exclusive_write", 00:09:39.917 
"zoned": false, 00:09:39.917 "supported_io_types": { 00:09:39.917 "read": true, 00:09:39.917 "write": true, 00:09:39.917 "unmap": true, 00:09:39.917 "flush": true, 00:09:39.917 "reset": true, 00:09:39.917 "nvme_admin": false, 00:09:39.917 "nvme_io": false, 00:09:39.917 "nvme_io_md": false, 00:09:39.917 "write_zeroes": true, 00:09:39.917 "zcopy": true, 00:09:39.917 "get_zone_info": false, 00:09:39.917 "zone_management": false, 00:09:39.917 "zone_append": false, 00:09:39.917 "compare": false, 00:09:39.917 "compare_and_write": false, 00:09:39.917 "abort": true, 00:09:39.917 "seek_hole": false, 00:09:39.917 "seek_data": false, 00:09:39.917 "copy": true, 00:09:39.917 "nvme_iov_md": false 00:09:39.917 }, 00:09:39.917 "memory_domains": [ 00:09:39.917 { 00:09:39.917 "dma_device_id": "system", 00:09:39.917 "dma_device_type": 1 00:09:39.917 }, 00:09:39.917 { 00:09:39.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.917 "dma_device_type": 2 00:09:39.917 } 00:09:39.917 ], 00:09:39.917 "driver_specific": {} 00:09:39.917 } 00:09:39.917 ] 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.917 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.918 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.918 "name": "Existed_Raid", 00:09:39.918 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:39.918 "strip_size_kb": 0, 00:09:39.918 "state": "online", 00:09:39.918 "raid_level": "raid1", 00:09:39.918 "superblock": true, 00:09:39.918 "num_base_bdevs": 3, 00:09:39.918 "num_base_bdevs_discovered": 3, 00:09:39.918 "num_base_bdevs_operational": 3, 00:09:39.918 "base_bdevs_list": [ 00:09:39.918 { 00:09:39.918 "name": "NewBaseBdev", 00:09:39.918 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:39.918 "is_configured": true, 00:09:39.918 "data_offset": 2048, 00:09:39.918 "data_size": 63488 00:09:39.918 }, 00:09:39.918 { 00:09:39.918 "name": "BaseBdev2", 00:09:39.918 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:39.918 "is_configured": true, 00:09:39.918 "data_offset": 2048, 00:09:39.918 "data_size": 63488 00:09:39.918 }, 00:09:39.918 
{ 00:09:39.918 "name": "BaseBdev3", 00:09:39.918 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:39.918 "is_configured": true, 00:09:39.918 "data_offset": 2048, 00:09:39.918 "data_size": 63488 00:09:39.918 } 00:09:39.918 ] 00:09:39.918 }' 00:09:39.918 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.918 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.487 [2024-12-06 23:43:51.831862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.487 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.487 "name": "Existed_Raid", 00:09:40.487 
"aliases": [ 00:09:40.487 "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4" 00:09:40.487 ], 00:09:40.487 "product_name": "Raid Volume", 00:09:40.487 "block_size": 512, 00:09:40.487 "num_blocks": 63488, 00:09:40.487 "uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:40.487 "assigned_rate_limits": { 00:09:40.487 "rw_ios_per_sec": 0, 00:09:40.487 "rw_mbytes_per_sec": 0, 00:09:40.487 "r_mbytes_per_sec": 0, 00:09:40.487 "w_mbytes_per_sec": 0 00:09:40.487 }, 00:09:40.487 "claimed": false, 00:09:40.487 "zoned": false, 00:09:40.487 "supported_io_types": { 00:09:40.487 "read": true, 00:09:40.487 "write": true, 00:09:40.487 "unmap": false, 00:09:40.487 "flush": false, 00:09:40.487 "reset": true, 00:09:40.487 "nvme_admin": false, 00:09:40.487 "nvme_io": false, 00:09:40.487 "nvme_io_md": false, 00:09:40.487 "write_zeroes": true, 00:09:40.487 "zcopy": false, 00:09:40.487 "get_zone_info": false, 00:09:40.487 "zone_management": false, 00:09:40.487 "zone_append": false, 00:09:40.487 "compare": false, 00:09:40.487 "compare_and_write": false, 00:09:40.487 "abort": false, 00:09:40.487 "seek_hole": false, 00:09:40.487 "seek_data": false, 00:09:40.487 "copy": false, 00:09:40.487 "nvme_iov_md": false 00:09:40.487 }, 00:09:40.487 "memory_domains": [ 00:09:40.487 { 00:09:40.487 "dma_device_id": "system", 00:09:40.487 "dma_device_type": 1 00:09:40.487 }, 00:09:40.487 { 00:09:40.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.487 "dma_device_type": 2 00:09:40.487 }, 00:09:40.488 { 00:09:40.488 "dma_device_id": "system", 00:09:40.488 "dma_device_type": 1 00:09:40.488 }, 00:09:40.488 { 00:09:40.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.488 "dma_device_type": 2 00:09:40.488 }, 00:09:40.488 { 00:09:40.488 "dma_device_id": "system", 00:09:40.488 "dma_device_type": 1 00:09:40.488 }, 00:09:40.488 { 00:09:40.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.488 "dma_device_type": 2 00:09:40.488 } 00:09:40.488 ], 00:09:40.488 "driver_specific": { 00:09:40.488 "raid": { 00:09:40.488 
"uuid": "15ef7e21-ea30-49ca-8ea0-1b67434c5bf4", 00:09:40.488 "strip_size_kb": 0, 00:09:40.488 "state": "online", 00:09:40.488 "raid_level": "raid1", 00:09:40.488 "superblock": true, 00:09:40.488 "num_base_bdevs": 3, 00:09:40.488 "num_base_bdevs_discovered": 3, 00:09:40.488 "num_base_bdevs_operational": 3, 00:09:40.488 "base_bdevs_list": [ 00:09:40.488 { 00:09:40.488 "name": "NewBaseBdev", 00:09:40.488 "uuid": "d4f370c7-27d2-4bae-bbad-9e1e9bbc0cf9", 00:09:40.488 "is_configured": true, 00:09:40.488 "data_offset": 2048, 00:09:40.488 "data_size": 63488 00:09:40.488 }, 00:09:40.488 { 00:09:40.488 "name": "BaseBdev2", 00:09:40.488 "uuid": "cda4d1c5-2bf8-4fb5-b45c-abb9f3b047f3", 00:09:40.488 "is_configured": true, 00:09:40.488 "data_offset": 2048, 00:09:40.488 "data_size": 63488 00:09:40.488 }, 00:09:40.488 { 00:09:40.488 "name": "BaseBdev3", 00:09:40.488 "uuid": "7b1a7357-fbf5-4202-a2ad-bd5449e7248e", 00:09:40.488 "is_configured": true, 00:09:40.488 "data_offset": 2048, 00:09:40.488 "data_size": 63488 00:09:40.488 } 00:09:40.488 ] 00:09:40.488 } 00:09:40.488 } 00:09:40.488 }' 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:40.488 BaseBdev2 00:09:40.488 BaseBdev3' 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.488 
23:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.488 23:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.488 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.746 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.746 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.746 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.747 [2024-12-06 23:43:52.095069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.747 [2024-12-06 23:43:52.095195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.747 [2024-12-06 23:43:52.095296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.747 [2024-12-06 23:43:52.095636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.747 [2024-12-06 23:43:52.095704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67934 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67934 ']' 00:09:40.747 23:43:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67934 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67934 00:09:40.747 killing process with pid 67934 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67934' 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67934 00:09:40.747 [2024-12-06 23:43:52.143464] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.747 23:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67934 00:09:41.006 [2024-12-06 23:43:52.481477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.387 23:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.387 00:09:42.387 real 0m10.835s 00:09:42.387 user 0m16.977s 00:09:42.387 sys 0m1.983s 00:09:42.387 23:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.387 ************************************ 00:09:42.387 END TEST raid_state_function_test_sb 00:09:42.387 ************************************ 00:09:42.387 23:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.387 23:43:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:42.387 23:43:53 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:42.387 23:43:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.387 23:43:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.387 ************************************ 00:09:42.387 START TEST raid_superblock_test 00:09:42.388 ************************************ 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:42.388 23:43:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68560 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68560 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68560 ']' 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.388 23:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.388 [2024-12-06 23:43:53.887730] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:09:42.388 [2024-12-06 23:43:53.887931] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68560 ] 00:09:42.648 [2024-12-06 23:43:54.039060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.648 [2024-12-06 23:43:54.174802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.908 [2024-12-06 23:43:54.408943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.908 [2024-12-06 23:43:54.409103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:43.169 
23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.169 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.429 malloc1 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.429 [2024-12-06 23:43:54.781805] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:43.429 [2024-12-06 23:43:54.781952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.429 [2024-12-06 23:43:54.781995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:43.429 [2024-12-06 23:43:54.782025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.429 [2024-12-06 23:43:54.784535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.429 [2024-12-06 23:43:54.784610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:43.429 pt1 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.429 malloc2 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.429 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.430 [2024-12-06 23:43:54.843315] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.430 [2024-12-06 23:43:54.843439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.430 [2024-12-06 23:43:54.843472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:43.430 [2024-12-06 23:43:54.843482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.430 [2024-12-06 23:43:54.845856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.430 [2024-12-06 23:43:54.845890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.430 
pt2 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.430 malloc3 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.430 [2024-12-06 23:43:54.918250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.430 [2024-12-06 23:43:54.918377] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.430 [2024-12-06 23:43:54.918417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:43.430 [2024-12-06 23:43:54.918447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.430 [2024-12-06 23:43:54.920831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.430 [2024-12-06 23:43:54.920903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.430 pt3 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.430 [2024-12-06 23:43:54.930273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:43.430 [2024-12-06 23:43:54.932369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.430 [2024-12-06 23:43:54.932480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.430 [2024-12-06 23:43:54.932682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:43.430 [2024-12-06 23:43:54.932738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.430 [2024-12-06 23:43:54.932992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:43.430 
[2024-12-06 23:43:54.933215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:43.430 [2024-12-06 23:43:54.933261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:43.430 [2024-12-06 23:43:54.933463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.430 "name": "raid_bdev1", 00:09:43.430 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:43.430 "strip_size_kb": 0, 00:09:43.430 "state": "online", 00:09:43.430 "raid_level": "raid1", 00:09:43.430 "superblock": true, 00:09:43.430 "num_base_bdevs": 3, 00:09:43.430 "num_base_bdevs_discovered": 3, 00:09:43.430 "num_base_bdevs_operational": 3, 00:09:43.430 "base_bdevs_list": [ 00:09:43.430 { 00:09:43.430 "name": "pt1", 00:09:43.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.430 "is_configured": true, 00:09:43.430 "data_offset": 2048, 00:09:43.430 "data_size": 63488 00:09:43.430 }, 00:09:43.430 { 00:09:43.430 "name": "pt2", 00:09:43.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.430 "is_configured": true, 00:09:43.430 "data_offset": 2048, 00:09:43.430 "data_size": 63488 00:09:43.430 }, 00:09:43.430 { 00:09:43.430 "name": "pt3", 00:09:43.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.430 "is_configured": true, 00:09:43.430 "data_offset": 2048, 00:09:43.430 "data_size": 63488 00:09:43.430 } 00:09:43.430 ] 00:09:43.430 }' 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.430 23:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.003 23:43:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.003 [2024-12-06 23:43:55.337973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.003 "name": "raid_bdev1", 00:09:44.003 "aliases": [ 00:09:44.003 "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845" 00:09:44.003 ], 00:09:44.003 "product_name": "Raid Volume", 00:09:44.003 "block_size": 512, 00:09:44.003 "num_blocks": 63488, 00:09:44.003 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:44.003 "assigned_rate_limits": { 00:09:44.003 "rw_ios_per_sec": 0, 00:09:44.003 "rw_mbytes_per_sec": 0, 00:09:44.003 "r_mbytes_per_sec": 0, 00:09:44.003 "w_mbytes_per_sec": 0 00:09:44.003 }, 00:09:44.003 "claimed": false, 00:09:44.003 "zoned": false, 00:09:44.003 "supported_io_types": { 00:09:44.003 "read": true, 00:09:44.003 "write": true, 00:09:44.003 "unmap": false, 00:09:44.003 "flush": false, 00:09:44.003 "reset": true, 00:09:44.003 "nvme_admin": false, 00:09:44.003 "nvme_io": false, 00:09:44.003 "nvme_io_md": false, 00:09:44.003 "write_zeroes": true, 00:09:44.003 "zcopy": false, 00:09:44.003 "get_zone_info": false, 00:09:44.003 "zone_management": false, 00:09:44.003 "zone_append": false, 00:09:44.003 "compare": false, 00:09:44.003 
"compare_and_write": false, 00:09:44.003 "abort": false, 00:09:44.003 "seek_hole": false, 00:09:44.003 "seek_data": false, 00:09:44.003 "copy": false, 00:09:44.003 "nvme_iov_md": false 00:09:44.003 }, 00:09:44.003 "memory_domains": [ 00:09:44.003 { 00:09:44.003 "dma_device_id": "system", 00:09:44.003 "dma_device_type": 1 00:09:44.003 }, 00:09:44.003 { 00:09:44.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.003 "dma_device_type": 2 00:09:44.003 }, 00:09:44.003 { 00:09:44.003 "dma_device_id": "system", 00:09:44.003 "dma_device_type": 1 00:09:44.003 }, 00:09:44.003 { 00:09:44.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.003 "dma_device_type": 2 00:09:44.003 }, 00:09:44.003 { 00:09:44.003 "dma_device_id": "system", 00:09:44.003 "dma_device_type": 1 00:09:44.003 }, 00:09:44.003 { 00:09:44.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.003 "dma_device_type": 2 00:09:44.003 } 00:09:44.003 ], 00:09:44.003 "driver_specific": { 00:09:44.003 "raid": { 00:09:44.003 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:44.003 "strip_size_kb": 0, 00:09:44.003 "state": "online", 00:09:44.003 "raid_level": "raid1", 00:09:44.003 "superblock": true, 00:09:44.003 "num_base_bdevs": 3, 00:09:44.003 "num_base_bdevs_discovered": 3, 00:09:44.003 "num_base_bdevs_operational": 3, 00:09:44.003 "base_bdevs_list": [ 00:09:44.003 { 00:09:44.003 "name": "pt1", 00:09:44.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.003 "is_configured": true, 00:09:44.003 "data_offset": 2048, 00:09:44.003 "data_size": 63488 00:09:44.003 }, 00:09:44.003 { 00:09:44.003 "name": "pt2", 00:09:44.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.003 "is_configured": true, 00:09:44.003 "data_offset": 2048, 00:09:44.003 "data_size": 63488 00:09:44.003 }, 00:09:44.003 { 00:09:44.003 "name": "pt3", 00:09:44.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.003 "is_configured": true, 00:09:44.003 "data_offset": 2048, 00:09:44.003 "data_size": 63488 00:09:44.003 } 
00:09:44.003 ] 00:09:44.003 } 00:09:44.003 } 00:09:44.003 }' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.003 pt2 00:09:44.003 pt3' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.003 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:44.263 [2024-12-06 23:43:55.609412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a9b74a83-bdbd-4dfc-88bf-34eb3ada6845 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a9b74a83-bdbd-4dfc-88bf-34eb3ada6845 ']' 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.263 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 [2024-12-06 23:43:55.657052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.264 [2024-12-06 23:43:55.657089] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.264 [2024-12-06 23:43:55.657181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.264 [2024-12-06 23:43:55.657266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.264 [2024-12-06 23:43:55.657276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 [2024-12-06 23:43:55.804851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:44.264 [2024-12-06 23:43:55.807133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:44.264 [2024-12-06 23:43:55.807195] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:44.264 [2024-12-06 23:43:55.807253] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:44.264 [2024-12-06 23:43:55.807310] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:44.264 [2024-12-06 23:43:55.807328] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:44.264 [2024-12-06 23:43:55.807345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.264 [2024-12-06 23:43:55.807356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:44.264 request: 00:09:44.264 { 00:09:44.264 "name": "raid_bdev1", 00:09:44.264 "raid_level": "raid1", 00:09:44.264 "base_bdevs": [ 00:09:44.264 "malloc1", 00:09:44.264 "malloc2", 00:09:44.264 "malloc3" 00:09:44.264 ], 00:09:44.264 "superblock": false, 00:09:44.264 "method": "bdev_raid_create", 00:09:44.264 "req_id": 1 00:09:44.264 } 00:09:44.264 Got JSON-RPC error response 00:09:44.264 response: 00:09:44.264 { 00:09:44.264 "code": -17, 00:09:44.264 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:44.264 } 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:44.264 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.523 [2024-12-06 23:43:55.872693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.523 [2024-12-06 23:43:55.872797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.523 [2024-12-06 23:43:55.872834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:44.523 [2024-12-06 23:43:55.872863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.523 [2024-12-06 23:43:55.875367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.523 [2024-12-06 23:43:55.875438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.523 [2024-12-06 23:43:55.875544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:44.523 [2024-12-06 23:43:55.875616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:44.523 pt1 00:09:44.523 
23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.523 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.523 "name": "raid_bdev1", 00:09:44.523 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:44.523 "strip_size_kb": 0, 00:09:44.523 
"state": "configuring", 00:09:44.523 "raid_level": "raid1", 00:09:44.523 "superblock": true, 00:09:44.523 "num_base_bdevs": 3, 00:09:44.523 "num_base_bdevs_discovered": 1, 00:09:44.523 "num_base_bdevs_operational": 3, 00:09:44.523 "base_bdevs_list": [ 00:09:44.523 { 00:09:44.523 "name": "pt1", 00:09:44.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.523 "is_configured": true, 00:09:44.523 "data_offset": 2048, 00:09:44.523 "data_size": 63488 00:09:44.523 }, 00:09:44.523 { 00:09:44.523 "name": null, 00:09:44.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.523 "is_configured": false, 00:09:44.524 "data_offset": 2048, 00:09:44.524 "data_size": 63488 00:09:44.524 }, 00:09:44.524 { 00:09:44.524 "name": null, 00:09:44.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.524 "is_configured": false, 00:09:44.524 "data_offset": 2048, 00:09:44.524 "data_size": 63488 00:09:44.524 } 00:09:44.524 ] 00:09:44.524 }' 00:09:44.524 23:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.524 23:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.782 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:44.782 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.783 [2024-12-06 23:43:56.315979] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.783 [2024-12-06 23:43:56.316060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.783 [2024-12-06 23:43:56.316088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:44.783 
[2024-12-06 23:43:56.316098] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.783 [2024-12-06 23:43:56.316610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.783 [2024-12-06 23:43:56.316629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.783 [2024-12-06 23:43:56.316736] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.783 [2024-12-06 23:43:56.316760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.783 pt2 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.783 [2024-12-06 23:43:56.327935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.783 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.041 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.041 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.041 "name": "raid_bdev1", 00:09:45.041 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:45.041 "strip_size_kb": 0, 00:09:45.041 "state": "configuring", 00:09:45.041 "raid_level": "raid1", 00:09:45.041 "superblock": true, 00:09:45.041 "num_base_bdevs": 3, 00:09:45.041 "num_base_bdevs_discovered": 1, 00:09:45.041 "num_base_bdevs_operational": 3, 00:09:45.041 "base_bdevs_list": [ 00:09:45.041 { 00:09:45.041 "name": "pt1", 00:09:45.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.041 "is_configured": true, 00:09:45.041 "data_offset": 2048, 00:09:45.041 "data_size": 63488 00:09:45.041 }, 00:09:45.041 { 00:09:45.041 "name": null, 00:09:45.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.041 "is_configured": false, 00:09:45.041 "data_offset": 0, 00:09:45.041 "data_size": 63488 00:09:45.041 }, 00:09:45.041 { 00:09:45.041 "name": null, 00:09:45.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.041 "is_configured": false, 00:09:45.041 
"data_offset": 2048, 00:09:45.041 "data_size": 63488 00:09:45.041 } 00:09:45.041 ] 00:09:45.041 }' 00:09:45.041 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.041 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.301 [2024-12-06 23:43:56.739263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.301 [2024-12-06 23:43:56.739450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.301 [2024-12-06 23:43:56.739493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:45.301 [2024-12-06 23:43:56.739530] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.301 [2024-12-06 23:43:56.740111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.301 [2024-12-06 23:43:56.740207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.301 [2024-12-06 23:43:56.740334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:45.301 [2024-12-06 23:43:56.740402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.301 pt2 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.301 23:43:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.301 [2024-12-06 23:43:56.751195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:45.301 [2024-12-06 23:43:56.751281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.301 [2024-12-06 23:43:56.751310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:45.301 [2024-12-06 23:43:56.751339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.301 [2024-12-06 23:43:56.751772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.301 [2024-12-06 23:43:56.751832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:45.301 [2024-12-06 23:43:56.751918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:45.301 [2024-12-06 23:43:56.751966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:45.301 [2024-12-06 23:43:56.752122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:45.301 [2024-12-06 23:43:56.752165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.301 [2024-12-06 23:43:56.752444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:45.301 [2024-12-06 23:43:56.752635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:45.301 [2024-12-06 23:43:56.752687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:45.301 [2024-12-06 23:43:56.752869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.301 pt3 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.301 23:43:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.301 "name": "raid_bdev1", 00:09:45.301 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:45.301 "strip_size_kb": 0, 00:09:45.301 "state": "online", 00:09:45.301 "raid_level": "raid1", 00:09:45.301 "superblock": true, 00:09:45.301 "num_base_bdevs": 3, 00:09:45.301 "num_base_bdevs_discovered": 3, 00:09:45.301 "num_base_bdevs_operational": 3, 00:09:45.301 "base_bdevs_list": [ 00:09:45.301 { 00:09:45.301 "name": "pt1", 00:09:45.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.301 "is_configured": true, 00:09:45.301 "data_offset": 2048, 00:09:45.301 "data_size": 63488 00:09:45.301 }, 00:09:45.301 { 00:09:45.301 "name": "pt2", 00:09:45.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.301 "is_configured": true, 00:09:45.301 "data_offset": 2048, 00:09:45.301 "data_size": 63488 00:09:45.301 }, 00:09:45.301 { 00:09:45.301 "name": "pt3", 00:09:45.301 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.301 "is_configured": true, 00:09:45.301 "data_offset": 2048, 00:09:45.301 "data_size": 63488 00:09:45.301 } 00:09:45.301 ] 00:09:45.301 }' 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.301 23:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.870 [2024-12-06 23:43:57.218820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.870 "name": "raid_bdev1", 00:09:45.870 "aliases": [ 00:09:45.870 "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845" 00:09:45.870 ], 00:09:45.870 "product_name": "Raid Volume", 00:09:45.870 "block_size": 512, 00:09:45.870 "num_blocks": 63488, 00:09:45.870 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:45.870 "assigned_rate_limits": { 00:09:45.870 "rw_ios_per_sec": 0, 00:09:45.870 "rw_mbytes_per_sec": 0, 00:09:45.870 "r_mbytes_per_sec": 0, 00:09:45.870 "w_mbytes_per_sec": 0 00:09:45.870 }, 00:09:45.870 "claimed": false, 00:09:45.870 "zoned": false, 00:09:45.870 "supported_io_types": { 00:09:45.870 "read": true, 00:09:45.870 "write": true, 00:09:45.870 "unmap": false, 00:09:45.870 "flush": false, 00:09:45.870 "reset": true, 00:09:45.870 "nvme_admin": false, 00:09:45.870 "nvme_io": false, 00:09:45.870 "nvme_io_md": false, 00:09:45.870 "write_zeroes": true, 00:09:45.870 "zcopy": false, 00:09:45.870 "get_zone_info": 
false, 00:09:45.870 "zone_management": false, 00:09:45.870 "zone_append": false, 00:09:45.870 "compare": false, 00:09:45.870 "compare_and_write": false, 00:09:45.870 "abort": false, 00:09:45.870 "seek_hole": false, 00:09:45.870 "seek_data": false, 00:09:45.870 "copy": false, 00:09:45.870 "nvme_iov_md": false 00:09:45.870 }, 00:09:45.870 "memory_domains": [ 00:09:45.870 { 00:09:45.870 "dma_device_id": "system", 00:09:45.870 "dma_device_type": 1 00:09:45.870 }, 00:09:45.870 { 00:09:45.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.870 "dma_device_type": 2 00:09:45.870 }, 00:09:45.870 { 00:09:45.870 "dma_device_id": "system", 00:09:45.870 "dma_device_type": 1 00:09:45.870 }, 00:09:45.870 { 00:09:45.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.870 "dma_device_type": 2 00:09:45.870 }, 00:09:45.870 { 00:09:45.870 "dma_device_id": "system", 00:09:45.870 "dma_device_type": 1 00:09:45.870 }, 00:09:45.870 { 00:09:45.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.870 "dma_device_type": 2 00:09:45.870 } 00:09:45.870 ], 00:09:45.870 "driver_specific": { 00:09:45.870 "raid": { 00:09:45.870 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:45.870 "strip_size_kb": 0, 00:09:45.870 "state": "online", 00:09:45.870 "raid_level": "raid1", 00:09:45.870 "superblock": true, 00:09:45.870 "num_base_bdevs": 3, 00:09:45.870 "num_base_bdevs_discovered": 3, 00:09:45.870 "num_base_bdevs_operational": 3, 00:09:45.870 "base_bdevs_list": [ 00:09:45.870 { 00:09:45.870 "name": "pt1", 00:09:45.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.870 "is_configured": true, 00:09:45.870 "data_offset": 2048, 00:09:45.870 "data_size": 63488 00:09:45.870 }, 00:09:45.870 { 00:09:45.870 "name": "pt2", 00:09:45.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.870 "is_configured": true, 00:09:45.870 "data_offset": 2048, 00:09:45.870 "data_size": 63488 00:09:45.870 }, 00:09:45.870 { 00:09:45.870 "name": "pt3", 00:09:45.870 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:45.870 "is_configured": true, 00:09:45.870 "data_offset": 2048, 00:09:45.870 "data_size": 63488 00:09:45.870 } 00:09:45.870 ] 00:09:45.870 } 00:09:45.870 } 00:09:45.870 }' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.870 pt2 00:09:45.870 pt3' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.870 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.130 [2024-12-06 23:43:57.502176] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a9b74a83-bdbd-4dfc-88bf-34eb3ada6845 '!=' a9b74a83-bdbd-4dfc-88bf-34eb3ada6845 ']' 00:09:46.130 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.131 [2024-12-06 23:43:57.549882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.131 23:43:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.131 "name": "raid_bdev1", 00:09:46.131 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:46.131 "strip_size_kb": 0, 00:09:46.131 "state": "online", 00:09:46.131 "raid_level": "raid1", 00:09:46.131 "superblock": true, 00:09:46.131 "num_base_bdevs": 3, 00:09:46.131 "num_base_bdevs_discovered": 2, 00:09:46.131 "num_base_bdevs_operational": 2, 00:09:46.131 "base_bdevs_list": [ 00:09:46.131 { 00:09:46.131 "name": null, 00:09:46.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.131 "is_configured": false, 00:09:46.131 "data_offset": 0, 00:09:46.131 "data_size": 63488 00:09:46.131 }, 00:09:46.131 { 00:09:46.131 "name": "pt2", 00:09:46.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.131 "is_configured": true, 00:09:46.131 "data_offset": 2048, 00:09:46.131 "data_size": 63488 00:09:46.131 }, 00:09:46.131 { 00:09:46.131 "name": "pt3", 00:09:46.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.131 "is_configured": true, 00:09:46.131 "data_offset": 2048, 00:09:46.131 "data_size": 63488 00:09:46.131 } 
00:09:46.131 ] 00:09:46.131 }' 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.131 23:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.701 [2024-12-06 23:43:58.025110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.701 [2024-12-06 23:43:58.025247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.701 [2024-12-06 23:43:58.025376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.701 [2024-12-06 23:43:58.025469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.701 [2024-12-06 23:43:58.025534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.701 23:43:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.701 [2024-12-06 23:43:58.108866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.701 [2024-12-06 23:43:58.108941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.701 [2024-12-06 23:43:58.108959] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:46.701 [2024-12-06 23:43:58.108971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.701 [2024-12-06 23:43:58.111548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.701 [2024-12-06 23:43:58.111654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.701 [2024-12-06 23:43:58.111756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.701 [2024-12-06 23:43:58.111816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.701 pt2 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.701 23:43:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.701 "name": "raid_bdev1", 00:09:46.701 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:46.701 "strip_size_kb": 0, 00:09:46.701 "state": "configuring", 00:09:46.701 "raid_level": "raid1", 00:09:46.701 "superblock": true, 00:09:46.701 "num_base_bdevs": 3, 00:09:46.701 "num_base_bdevs_discovered": 1, 00:09:46.701 "num_base_bdevs_operational": 2, 00:09:46.701 "base_bdevs_list": [ 00:09:46.701 { 00:09:46.701 "name": null, 00:09:46.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.701 "is_configured": false, 00:09:46.701 "data_offset": 2048, 00:09:46.701 "data_size": 63488 00:09:46.701 }, 00:09:46.701 { 00:09:46.701 "name": "pt2", 00:09:46.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.701 "is_configured": true, 00:09:46.701 "data_offset": 2048, 00:09:46.701 "data_size": 63488 00:09:46.701 }, 00:09:46.701 { 00:09:46.701 "name": null, 00:09:46.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.701 "is_configured": false, 00:09:46.701 "data_offset": 2048, 00:09:46.701 "data_size": 63488 00:09:46.701 } 
00:09:46.701 ] 00:09:46.701 }' 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.701 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.270 [2024-12-06 23:43:58.576168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.270 [2024-12-06 23:43:58.576339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.270 [2024-12-06 23:43:58.576382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:47.270 [2024-12-06 23:43:58.576414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.270 [2024-12-06 23:43:58.577025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.270 [2024-12-06 23:43:58.577099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.270 [2024-12-06 23:43:58.577247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:47.270 [2024-12-06 23:43:58.577308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.270 [2024-12-06 23:43:58.577464] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:47.270 [2024-12-06 23:43:58.577504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.270 [2024-12-06 23:43:58.577823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:47.270 [2024-12-06 23:43:58.578040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:47.270 [2024-12-06 23:43:58.578083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:47.270 [2024-12-06 23:43:58.578284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.270 pt3 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.270 "name": "raid_bdev1", 00:09:47.270 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:47.270 "strip_size_kb": 0, 00:09:47.270 "state": "online", 00:09:47.270 "raid_level": "raid1", 00:09:47.270 "superblock": true, 00:09:47.270 "num_base_bdevs": 3, 00:09:47.270 "num_base_bdevs_discovered": 2, 00:09:47.270 "num_base_bdevs_operational": 2, 00:09:47.270 "base_bdevs_list": [ 00:09:47.270 { 00:09:47.270 "name": null, 00:09:47.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.270 "is_configured": false, 00:09:47.270 "data_offset": 2048, 00:09:47.270 "data_size": 63488 00:09:47.270 }, 00:09:47.270 { 00:09:47.270 "name": "pt2", 00:09:47.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.270 "is_configured": true, 00:09:47.270 "data_offset": 2048, 00:09:47.270 "data_size": 63488 00:09:47.270 }, 00:09:47.270 { 00:09:47.270 "name": "pt3", 00:09:47.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.270 "is_configured": true, 00:09:47.270 "data_offset": 2048, 00:09:47.270 "data_size": 63488 00:09:47.270 } 00:09:47.270 ] 00:09:47.270 }' 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.270 23:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.530 [2024-12-06 23:43:59.043321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.530 [2024-12-06 23:43:59.043468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.530 [2024-12-06 23:43:59.043591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.530 [2024-12-06 23:43:59.043696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.530 [2024-12-06 23:43:59.043746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:47.530 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 [2024-12-06 23:43:59.119173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.789 [2024-12-06 23:43:59.119247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.789 [2024-12-06 23:43:59.119267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:47.789 [2024-12-06 23:43:59.119278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.789 [2024-12-06 23:43:59.121857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.789 [2024-12-06 23:43:59.121893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.789 [2024-12-06 23:43:59.121981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:47.789 [2024-12-06 23:43:59.122033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.789 [2024-12-06 23:43:59.122179] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:47.789 [2024-12-06 23:43:59.122189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.789 [2024-12-06 23:43:59.122206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:47.789 [2024-12-06 23:43:59.122275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.789 pt1 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.789 "name": "raid_bdev1", 00:09:47.789 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:47.789 "strip_size_kb": 0, 00:09:47.789 "state": "configuring", 00:09:47.789 "raid_level": "raid1", 00:09:47.789 "superblock": true, 00:09:47.789 "num_base_bdevs": 3, 00:09:47.789 "num_base_bdevs_discovered": 1, 00:09:47.789 "num_base_bdevs_operational": 2, 00:09:47.789 "base_bdevs_list": [ 00:09:47.789 { 00:09:47.789 "name": null, 00:09:47.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.789 "is_configured": false, 00:09:47.789 "data_offset": 2048, 00:09:47.789 "data_size": 63488 00:09:47.789 }, 00:09:47.789 { 00:09:47.789 "name": "pt2", 00:09:47.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.789 "is_configured": true, 00:09:47.789 "data_offset": 2048, 00:09:47.789 "data_size": 63488 00:09:47.789 }, 00:09:47.789 { 00:09:47.789 "name": null, 00:09:47.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.789 "is_configured": false, 00:09:47.789 "data_offset": 2048, 00:09:47.789 "data_size": 63488 00:09:47.789 } 00:09:47.789 ] 00:09:47.789 }' 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.789 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.048 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:48.048 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:48.048 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.048 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.048 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.307 [2024-12-06 23:43:59.618457] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:48.307 [2024-12-06 23:43:59.618562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.307 [2024-12-06 23:43:59.618593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:48.307 [2024-12-06 23:43:59.618603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.307 [2024-12-06 23:43:59.619275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.307 [2024-12-06 23:43:59.619304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:48.307 [2024-12-06 23:43:59.619413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:48.307 [2024-12-06 23:43:59.619444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:48.307 [2024-12-06 23:43:59.619606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:48.307 [2024-12-06 23:43:59.619616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.307 [2024-12-06 23:43:59.619930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:48.307 [2024-12-06 23:43:59.620116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:48.307 [2024-12-06 23:43:59.620133] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:48.307 [2024-12-06 23:43:59.620291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.307 pt3 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.307 "name": "raid_bdev1", 00:09:48.307 "uuid": "a9b74a83-bdbd-4dfc-88bf-34eb3ada6845", 00:09:48.307 "strip_size_kb": 0, 00:09:48.307 "state": "online", 00:09:48.307 "raid_level": "raid1", 00:09:48.307 "superblock": true, 00:09:48.307 "num_base_bdevs": 3, 00:09:48.307 "num_base_bdevs_discovered": 2, 00:09:48.307 "num_base_bdevs_operational": 2, 00:09:48.307 "base_bdevs_list": [ 00:09:48.307 { 00:09:48.307 "name": null, 00:09:48.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.307 "is_configured": false, 00:09:48.307 "data_offset": 2048, 00:09:48.307 "data_size": 63488 00:09:48.307 }, 00:09:48.307 { 00:09:48.307 "name": "pt2", 00:09:48.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.307 "is_configured": true, 00:09:48.307 "data_offset": 2048, 00:09:48.307 "data_size": 63488 00:09:48.307 }, 00:09:48.307 { 00:09:48.307 "name": "pt3", 00:09:48.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.307 "is_configured": true, 00:09:48.307 "data_offset": 2048, 00:09:48.307 "data_size": 63488 00:09:48.307 } 00:09:48.307 ] 00:09:48.307 }' 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.307 23:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:48.565 [2024-12-06 23:44:00.097881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.565 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a9b74a83-bdbd-4dfc-88bf-34eb3ada6845 '!=' a9b74a83-bdbd-4dfc-88bf-34eb3ada6845 ']' 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68560 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68560 ']' 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68560 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68560 00:09:48.824 killing process with pid 68560 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68560' 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68560 00:09:48.824 [2024-12-06 23:44:00.181117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.824 [2024-12-06 23:44:00.181249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.824 23:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68560 00:09:48.824 [2024-12-06 23:44:00.181322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.824 [2024-12-06 23:44:00.181342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:49.082 [2024-12-06 23:44:00.515617] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.460 23:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:50.460 00:09:50.460 real 0m7.944s 00:09:50.460 user 0m12.271s 00:09:50.460 sys 0m1.442s 00:09:50.460 ************************************ 00:09:50.460 END TEST raid_superblock_test 00:09:50.460 ************************************ 00:09:50.460 23:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.460 23:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.460 23:44:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:50.460 23:44:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:50.461 23:44:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.461 23:44:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.461 ************************************ 00:09:50.461 START TEST raid_read_error_test 00:09:50.461 ************************************ 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:50.461 23:44:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:50.461 23:44:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xe4MJvbUfT 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69006 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69006 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69006 ']' 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.461 23:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.461 [2024-12-06 23:44:01.924822] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:09:50.461 [2024-12-06 23:44:01.925041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69006 ] 00:09:50.720 [2024-12-06 23:44:02.105847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.720 [2024-12-06 23:44:02.240174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.980 [2024-12-06 23:44:02.475946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.980 [2024-12-06 23:44:02.476068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.239 BaseBdev1_malloc 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.239 true 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.239 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 [2024-12-06 23:44:02.803113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:51.538 [2024-12-06 23:44:02.803186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.538 [2024-12-06 23:44:02.803208] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:51.538 [2024-12-06 23:44:02.803221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.538 [2024-12-06 23:44:02.805882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.538 [2024-12-06 23:44:02.806006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:51.538 BaseBdev1 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 BaseBdev2_malloc 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 true 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 [2024-12-06 23:44:02.877884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:51.538 [2024-12-06 23:44:02.877952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.538 [2024-12-06 23:44:02.877969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:51.538 [2024-12-06 23:44:02.877981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.538 [2024-12-06 23:44:02.880411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.538 [2024-12-06 23:44:02.880450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:51.538 BaseBdev2 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 BaseBdev3_malloc 00:09:51.538 23:44:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 true 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 [2024-12-06 23:44:02.970425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:51.538 [2024-12-06 23:44:02.970487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.538 [2024-12-06 23:44:02.970504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:51.538 [2024-12-06 23:44:02.970515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.538 [2024-12-06 23:44:02.973063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.538 [2024-12-06 23:44:02.973139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:51.538 BaseBdev3 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 [2024-12-06 23:44:02.982485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.538 [2024-12-06 23:44:02.984655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.538 [2024-12-06 23:44:02.984743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.538 [2024-12-06 23:44:02.984956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:51.538 [2024-12-06 23:44:02.984969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.538 [2024-12-06 23:44:02.985222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:51.538 [2024-12-06 23:44:02.985398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.538 [2024-12-06 23:44:02.985410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:51.538 [2024-12-06 23:44:02.985564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.538 23:44:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.538 23:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.538 23:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.538 23:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.538 "name": "raid_bdev1", 00:09:51.538 "uuid": "3eee085e-deca-47a3-867c-4c2db06561cc", 00:09:51.538 "strip_size_kb": 0, 00:09:51.538 "state": "online", 00:09:51.538 "raid_level": "raid1", 00:09:51.538 "superblock": true, 00:09:51.538 "num_base_bdevs": 3, 00:09:51.538 "num_base_bdevs_discovered": 3, 00:09:51.538 "num_base_bdevs_operational": 3, 00:09:51.538 "base_bdevs_list": [ 00:09:51.538 { 00:09:51.538 "name": "BaseBdev1", 00:09:51.538 "uuid": "695c38a8-ebba-58cb-aa68-ae29ff676434", 00:09:51.538 "is_configured": true, 00:09:51.538 "data_offset": 2048, 00:09:51.538 "data_size": 63488 00:09:51.538 }, 00:09:51.538 { 00:09:51.538 "name": "BaseBdev2", 00:09:51.538 "uuid": "9b77bbf2-28c5-571b-8361-79254daa7e11", 00:09:51.538 "is_configured": true, 00:09:51.538 "data_offset": 2048, 00:09:51.538 "data_size": 63488 
00:09:51.538 }, 00:09:51.538 { 00:09:51.538 "name": "BaseBdev3", 00:09:51.538 "uuid": "25006e34-ff85-5c08-8a0a-be3b716b5c6e", 00:09:51.538 "is_configured": true, 00:09:51.538 "data_offset": 2048, 00:09:51.538 "data_size": 63488 00:09:51.538 } 00:09:51.538 ] 00:09:51.538 }' 00:09:51.538 23:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.538 23:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.103 23:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.103 23:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.103 [2024-12-06 23:44:03.487125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.038 
23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.038 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.039 "name": "raid_bdev1", 00:09:53.039 "uuid": "3eee085e-deca-47a3-867c-4c2db06561cc", 00:09:53.039 "strip_size_kb": 0, 00:09:53.039 "state": "online", 00:09:53.039 "raid_level": "raid1", 00:09:53.039 "superblock": true, 00:09:53.039 "num_base_bdevs": 3, 00:09:53.039 "num_base_bdevs_discovered": 3, 00:09:53.039 "num_base_bdevs_operational": 3, 00:09:53.039 "base_bdevs_list": [ 00:09:53.039 { 00:09:53.039 "name": "BaseBdev1", 00:09:53.039 "uuid": "695c38a8-ebba-58cb-aa68-ae29ff676434", 
00:09:53.039 "is_configured": true, 00:09:53.039 "data_offset": 2048, 00:09:53.039 "data_size": 63488 00:09:53.039 }, 00:09:53.039 { 00:09:53.039 "name": "BaseBdev2", 00:09:53.039 "uuid": "9b77bbf2-28c5-571b-8361-79254daa7e11", 00:09:53.039 "is_configured": true, 00:09:53.039 "data_offset": 2048, 00:09:53.039 "data_size": 63488 00:09:53.039 }, 00:09:53.039 { 00:09:53.039 "name": "BaseBdev3", 00:09:53.039 "uuid": "25006e34-ff85-5c08-8a0a-be3b716b5c6e", 00:09:53.039 "is_configured": true, 00:09:53.039 "data_offset": 2048, 00:09:53.039 "data_size": 63488 00:09:53.039 } 00:09:53.039 ] 00:09:53.039 }' 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.039 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.298 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.299 [2024-12-06 23:44:04.812168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.299 [2024-12-06 23:44:04.812309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.299 [2024-12-06 23:44:04.815007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.299 [2024-12-06 23:44:04.815103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.299 [2024-12-06 23:44:04.815235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.299 [2024-12-06 23:44:04.815283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:53.299 { 00:09:53.299 "results": [ 00:09:53.299 { 00:09:53.299 "job": "raid_bdev1", 
00:09:53.299 "core_mask": "0x1", 00:09:53.299 "workload": "randrw", 00:09:53.299 "percentage": 50, 00:09:53.299 "status": "finished", 00:09:53.299 "queue_depth": 1, 00:09:53.299 "io_size": 131072, 00:09:53.299 "runtime": 1.325817, 00:09:53.299 "iops": 10106.975547907441, 00:09:53.299 "mibps": 1263.3719434884301, 00:09:53.299 "io_failed": 0, 00:09:53.299 "io_timeout": 0, 00:09:53.299 "avg_latency_us": 96.35982324186925, 00:09:53.299 "min_latency_us": 22.91703056768559, 00:09:53.299 "max_latency_us": 1509.6174672489083 00:09:53.299 } 00:09:53.299 ], 00:09:53.299 "core_count": 1 00:09:53.299 } 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69006 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69006 ']' 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69006 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69006 00:09:53.299 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.559 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.559 killing process with pid 69006 00:09:53.559 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69006' 00:09:53.559 23:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69006 00:09:53.559 [2024-12-06 23:44:04.860959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.559 23:44:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69006 00:09:53.559 [2024-12-06 23:44:05.108253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xe4MJvbUfT 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:54.939 ************************************ 00:09:54.939 END TEST raid_read_error_test 00:09:54.939 ************************************ 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:54.939 00:09:54.939 real 0m4.579s 00:09:54.939 user 0m5.217s 00:09:54.939 sys 0m0.665s 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.939 23:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.939 23:44:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:54.939 23:44:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:54.939 23:44:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.939 23:44:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.939 ************************************ 00:09:54.939 START TEST raid_write_error_test 00:09:54.939 ************************************ 00:09:54.939 23:44:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zP3xdhABFS 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69146 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69146 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69146 ']' 00:09:54.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.939 23:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.199 [2024-12-06 23:44:06.563346] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:09:55.199 [2024-12-06 23:44:06.563462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69146 ] 00:09:55.199 [2024-12-06 23:44:06.733997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.458 [2024-12-06 23:44:06.873848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.718 [2024-12-06 23:44:07.115096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.718 [2024-12-06 23:44:07.115167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.979 BaseBdev1_malloc 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.979 true 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.979 [2024-12-06 23:44:07.446569] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.979 [2024-12-06 23:44:07.446638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.979 [2024-12-06 23:44:07.446673] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.979 [2024-12-06 23:44:07.446686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.979 [2024-12-06 23:44:07.449068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.979 [2024-12-06 23:44:07.449108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.979 BaseBdev1 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.979 BaseBdev2_malloc 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.979 true 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.979 [2024-12-06 23:44:07.521166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.979 [2024-12-06 23:44:07.521230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.979 [2024-12-06 23:44:07.521247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.979 [2024-12-06 23:44:07.521258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.979 [2024-12-06 23:44:07.523614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.979 [2024-12-06 23:44:07.523652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.979 BaseBdev2 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.979 23:44:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.979 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.239 BaseBdev3_malloc 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.239 true 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.239 [2024-12-06 23:44:07.606720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:56.239 [2024-12-06 23:44:07.606784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.239 [2024-12-06 23:44:07.606805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:56.239 [2024-12-06 23:44:07.606817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.239 [2024-12-06 23:44:07.609256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.239 [2024-12-06 23:44:07.609372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:56.239 BaseBdev3 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.239 [2024-12-06 23:44:07.618780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.239 [2024-12-06 23:44:07.620910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.239 [2024-12-06 23:44:07.620983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.239 [2024-12-06 23:44:07.621191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:56.239 [2024-12-06 23:44:07.621203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.239 [2024-12-06 23:44:07.621454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:56.239 [2024-12-06 23:44:07.621632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:56.239 [2024-12-06 23:44:07.621645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:56.239 [2024-12-06 23:44:07.621823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:56.239 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.240 "name": "raid_bdev1", 00:09:56.240 "uuid": "d2ff0533-3fa4-4b33-b46c-22d32a7414a4", 00:09:56.240 "strip_size_kb": 0, 00:09:56.240 "state": "online", 00:09:56.240 "raid_level": "raid1", 00:09:56.240 "superblock": true, 00:09:56.240 "num_base_bdevs": 3, 00:09:56.240 "num_base_bdevs_discovered": 3, 00:09:56.240 "num_base_bdevs_operational": 3, 00:09:56.240 "base_bdevs_list": [ 00:09:56.240 { 00:09:56.240 "name": "BaseBdev1", 00:09:56.240 
"uuid": "e75346d8-192c-5d60-9726-57e7499deada", 00:09:56.240 "is_configured": true, 00:09:56.240 "data_offset": 2048, 00:09:56.240 "data_size": 63488 00:09:56.240 }, 00:09:56.240 { 00:09:56.240 "name": "BaseBdev2", 00:09:56.240 "uuid": "b2fe4d5a-2c51-57cb-9b0f-cece7663643d", 00:09:56.240 "is_configured": true, 00:09:56.240 "data_offset": 2048, 00:09:56.240 "data_size": 63488 00:09:56.240 }, 00:09:56.240 { 00:09:56.240 "name": "BaseBdev3", 00:09:56.240 "uuid": "4b54f79e-8ca0-54fa-941c-bcaa3be95e3f", 00:09:56.240 "is_configured": true, 00:09:56.240 "data_offset": 2048, 00:09:56.240 "data_size": 63488 00:09:56.240 } 00:09:56.240 ] 00:09:56.240 }' 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.240 23:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.500 23:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:56.500 23:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:56.759 [2024-12-06 23:44:08.135473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.698 [2024-12-06 23:44:09.067146] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:57.698 [2024-12-06 23:44:09.067319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.698 [2024-12-06 23:44:09.067586] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.698 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.699 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:09:57.699 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.699 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.699 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.699 "name": "raid_bdev1", 00:09:57.699 "uuid": "d2ff0533-3fa4-4b33-b46c-22d32a7414a4", 00:09:57.699 "strip_size_kb": 0, 00:09:57.699 "state": "online", 00:09:57.699 "raid_level": "raid1", 00:09:57.699 "superblock": true, 00:09:57.699 "num_base_bdevs": 3, 00:09:57.699 "num_base_bdevs_discovered": 2, 00:09:57.699 "num_base_bdevs_operational": 2, 00:09:57.699 "base_bdevs_list": [ 00:09:57.699 { 00:09:57.699 "name": null, 00:09:57.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.699 "is_configured": false, 00:09:57.699 "data_offset": 0, 00:09:57.699 "data_size": 63488 00:09:57.699 }, 00:09:57.699 { 00:09:57.699 "name": "BaseBdev2", 00:09:57.699 "uuid": "b2fe4d5a-2c51-57cb-9b0f-cece7663643d", 00:09:57.699 "is_configured": true, 00:09:57.699 "data_offset": 2048, 00:09:57.699 "data_size": 63488 00:09:57.699 }, 00:09:57.699 { 00:09:57.699 "name": "BaseBdev3", 00:09:57.699 "uuid": "4b54f79e-8ca0-54fa-941c-bcaa3be95e3f", 00:09:57.699 "is_configured": true, 00:09:57.699 "data_offset": 2048, 00:09:57.699 "data_size": 63488 00:09:57.699 } 00:09:57.699 ] 00:09:57.699 }' 00:09:57.699 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.699 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.268 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.268 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.268 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.268 [2024-12-06 23:44:09.546820] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.268 [2024-12-06 23:44:09.546983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.268 [2024-12-06 23:44:09.549656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.268 [2024-12-06 23:44:09.549771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.268 [2024-12-06 23:44:09.549877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.269 [2024-12-06 23:44:09.549930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:58.269 { 00:09:58.269 "results": [ 00:09:58.269 { 00:09:58.269 "job": "raid_bdev1", 00:09:58.269 "core_mask": "0x1", 00:09:58.269 "workload": "randrw", 00:09:58.269 "percentage": 50, 00:09:58.269 "status": "finished", 00:09:58.269 "queue_depth": 1, 00:09:58.269 "io_size": 131072, 00:09:58.269 "runtime": 1.412078, 00:09:58.269 "iops": 11287.620088975254, 00:09:58.269 "mibps": 1410.9525111219068, 00:09:58.269 "io_failed": 0, 00:09:58.269 "io_timeout": 0, 00:09:58.269 "avg_latency_us": 85.94872755875224, 00:09:58.269 "min_latency_us": 23.699563318777294, 00:09:58.269 "max_latency_us": 1552.5449781659388 00:09:58.269 } 00:09:58.269 ], 00:09:58.269 "core_count": 1 00:09:58.269 } 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69146 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69146 ']' 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69146 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:58.269 23:44:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69146 00:09:58.269 killing process with pid 69146 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69146' 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69146 00:09:58.269 [2024-12-06 23:44:09.591999] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.269 23:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69146 00:09:58.528 [2024-12-06 23:44:09.849231] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zP3xdhABFS 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:59.909 ************************************ 00:09:59.909 END TEST raid_write_error_test 00:09:59.909 ************************************ 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:59.909 00:09:59.909 real 0m4.676s 00:09:59.909 user 0m5.411s 00:09:59.909 sys 0m0.648s 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.909 23:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.909 23:44:11 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:59.909 23:44:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:59.909 23:44:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:59.909 23:44:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:59.909 23:44:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.909 23:44:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.909 ************************************ 00:09:59.909 START TEST raid_state_function_test 00:09:59.909 ************************************ 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.909 
23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:59.909 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:59.910 23:44:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69296 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69296' 00:09:59.910 Process raid pid: 69296 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69296 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69296 ']' 00:09:59.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.910 23:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.910 [2024-12-06 23:44:11.289875] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:09:59.910 [2024-12-06 23:44:11.290084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.910 [2024-12-06 23:44:11.465482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.169 [2024-12-06 23:44:11.607422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.429 [2024-12-06 23:44:11.845154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.429 [2024-12-06 23:44:11.845299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.689 [2024-12-06 23:44:12.120682] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.689 [2024-12-06 23:44:12.120823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.689 [2024-12-06 23:44:12.120860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.689 [2024-12-06 23:44:12.120883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.689 [2024-12-06 23:44:12.120900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:00.689 [2024-12-06 23:44:12.120920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.689 [2024-12-06 23:44:12.120936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:00.689 [2024-12-06 23:44:12.120955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.689 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.690 "name": "Existed_Raid", 00:10:00.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.690 "strip_size_kb": 64, 00:10:00.690 "state": "configuring", 00:10:00.690 "raid_level": "raid0", 00:10:00.690 "superblock": false, 00:10:00.690 "num_base_bdevs": 4, 00:10:00.690 "num_base_bdevs_discovered": 0, 00:10:00.690 "num_base_bdevs_operational": 4, 00:10:00.690 "base_bdevs_list": [ 00:10:00.690 { 00:10:00.690 "name": "BaseBdev1", 00:10:00.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.690 "is_configured": false, 00:10:00.690 "data_offset": 0, 00:10:00.690 "data_size": 0 00:10:00.690 }, 00:10:00.690 { 00:10:00.690 "name": "BaseBdev2", 00:10:00.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.690 "is_configured": false, 00:10:00.690 "data_offset": 0, 00:10:00.690 "data_size": 0 00:10:00.690 }, 00:10:00.690 { 00:10:00.690 "name": "BaseBdev3", 00:10:00.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.690 "is_configured": false, 00:10:00.690 "data_offset": 0, 00:10:00.690 "data_size": 0 00:10:00.690 }, 00:10:00.690 { 00:10:00.690 "name": "BaseBdev4", 00:10:00.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.690 "is_configured": false, 00:10:00.690 "data_offset": 0, 00:10:00.690 "data_size": 0 00:10:00.690 } 00:10:00.690 ] 00:10:00.690 }' 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.690 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.260 [2024-12-06 23:44:12.571889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.260 [2024-12-06 23:44:12.572040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.260 [2024-12-06 23:44:12.583874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.260 [2024-12-06 23:44:12.583970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.260 [2024-12-06 23:44:12.584003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.260 [2024-12-06 23:44:12.584031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.260 [2024-12-06 23:44:12.584052] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.260 [2024-12-06 23:44:12.584077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.260 [2024-12-06 23:44:12.584098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:01.260 [2024-12-06 23:44:12.584133] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.260 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.261 [2024-12-06 23:44:12.639277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.261 BaseBdev1 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.261 [ 00:10:01.261 { 00:10:01.261 "name": "BaseBdev1", 00:10:01.261 "aliases": [ 00:10:01.261 "478b2381-029b-4aed-bcdf-d85cf10ac01f" 00:10:01.261 ], 00:10:01.261 "product_name": "Malloc disk", 00:10:01.261 "block_size": 512, 00:10:01.261 "num_blocks": 65536, 00:10:01.261 "uuid": "478b2381-029b-4aed-bcdf-d85cf10ac01f", 00:10:01.261 "assigned_rate_limits": { 00:10:01.261 "rw_ios_per_sec": 0, 00:10:01.261 "rw_mbytes_per_sec": 0, 00:10:01.261 "r_mbytes_per_sec": 0, 00:10:01.261 "w_mbytes_per_sec": 0 00:10:01.261 }, 00:10:01.261 "claimed": true, 00:10:01.261 "claim_type": "exclusive_write", 00:10:01.261 "zoned": false, 00:10:01.261 "supported_io_types": { 00:10:01.261 "read": true, 00:10:01.261 "write": true, 00:10:01.261 "unmap": true, 00:10:01.261 "flush": true, 00:10:01.261 "reset": true, 00:10:01.261 "nvme_admin": false, 00:10:01.261 "nvme_io": false, 00:10:01.261 "nvme_io_md": false, 00:10:01.261 "write_zeroes": true, 00:10:01.261 "zcopy": true, 00:10:01.261 "get_zone_info": false, 00:10:01.261 "zone_management": false, 00:10:01.261 "zone_append": false, 00:10:01.261 "compare": false, 00:10:01.261 "compare_and_write": false, 00:10:01.261 "abort": true, 00:10:01.261 "seek_hole": false, 00:10:01.261 "seek_data": false, 00:10:01.261 "copy": true, 00:10:01.261 "nvme_iov_md": false 00:10:01.261 }, 00:10:01.261 "memory_domains": [ 00:10:01.261 { 00:10:01.261 "dma_device_id": "system", 00:10:01.261 "dma_device_type": 1 00:10:01.261 }, 00:10:01.261 { 00:10:01.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.261 "dma_device_type": 2 00:10:01.261 } 00:10:01.261 ], 00:10:01.261 "driver_specific": {} 00:10:01.261 } 00:10:01.261 ] 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.261 "name": "Existed_Raid", 
00:10:01.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.261 "strip_size_kb": 64, 00:10:01.261 "state": "configuring", 00:10:01.261 "raid_level": "raid0", 00:10:01.261 "superblock": false, 00:10:01.261 "num_base_bdevs": 4, 00:10:01.261 "num_base_bdevs_discovered": 1, 00:10:01.261 "num_base_bdevs_operational": 4, 00:10:01.261 "base_bdevs_list": [ 00:10:01.261 { 00:10:01.261 "name": "BaseBdev1", 00:10:01.261 "uuid": "478b2381-029b-4aed-bcdf-d85cf10ac01f", 00:10:01.261 "is_configured": true, 00:10:01.261 "data_offset": 0, 00:10:01.261 "data_size": 65536 00:10:01.261 }, 00:10:01.261 { 00:10:01.261 "name": "BaseBdev2", 00:10:01.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.261 "is_configured": false, 00:10:01.261 "data_offset": 0, 00:10:01.261 "data_size": 0 00:10:01.261 }, 00:10:01.261 { 00:10:01.261 "name": "BaseBdev3", 00:10:01.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.261 "is_configured": false, 00:10:01.261 "data_offset": 0, 00:10:01.261 "data_size": 0 00:10:01.261 }, 00:10:01.261 { 00:10:01.261 "name": "BaseBdev4", 00:10:01.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.261 "is_configured": false, 00:10:01.261 "data_offset": 0, 00:10:01.261 "data_size": 0 00:10:01.261 } 00:10:01.261 ] 00:10:01.261 }' 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.261 23:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.829 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.829 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.829 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.830 [2024-12-06 23:44:13.098630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.830 [2024-12-06 23:44:13.098721] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.830 [2024-12-06 23:44:13.110643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.830 [2024-12-06 23:44:13.112857] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.830 [2024-12-06 23:44:13.112976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.830 [2024-12-06 23:44:13.112993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.830 [2024-12-06 23:44:13.113017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.830 [2024-12-06 23:44:13.113023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:01.830 [2024-12-06 23:44:13.113032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.830 "name": "Existed_Raid", 00:10:01.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.830 "strip_size_kb": 64, 00:10:01.830 "state": "configuring", 00:10:01.830 "raid_level": "raid0", 00:10:01.830 "superblock": false, 00:10:01.830 "num_base_bdevs": 4, 00:10:01.830 
"num_base_bdevs_discovered": 1, 00:10:01.830 "num_base_bdevs_operational": 4, 00:10:01.830 "base_bdevs_list": [ 00:10:01.830 { 00:10:01.830 "name": "BaseBdev1", 00:10:01.830 "uuid": "478b2381-029b-4aed-bcdf-d85cf10ac01f", 00:10:01.830 "is_configured": true, 00:10:01.830 "data_offset": 0, 00:10:01.830 "data_size": 65536 00:10:01.830 }, 00:10:01.830 { 00:10:01.830 "name": "BaseBdev2", 00:10:01.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.830 "is_configured": false, 00:10:01.830 "data_offset": 0, 00:10:01.830 "data_size": 0 00:10:01.830 }, 00:10:01.830 { 00:10:01.830 "name": "BaseBdev3", 00:10:01.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.830 "is_configured": false, 00:10:01.830 "data_offset": 0, 00:10:01.830 "data_size": 0 00:10:01.830 }, 00:10:01.830 { 00:10:01.830 "name": "BaseBdev4", 00:10:01.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.830 "is_configured": false, 00:10:01.830 "data_offset": 0, 00:10:01.830 "data_size": 0 00:10:01.830 } 00:10:01.830 ] 00:10:01.830 }' 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.830 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.090 [2024-12-06 23:44:13.606934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.090 BaseBdev2 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:02.090 23:44:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.090 [ 00:10:02.090 { 00:10:02.090 "name": "BaseBdev2", 00:10:02.090 "aliases": [ 00:10:02.090 "55d8a937-4c2f-4484-9120-dc603a957f30" 00:10:02.090 ], 00:10:02.090 "product_name": "Malloc disk", 00:10:02.090 "block_size": 512, 00:10:02.090 "num_blocks": 65536, 00:10:02.090 "uuid": "55d8a937-4c2f-4484-9120-dc603a957f30", 00:10:02.090 "assigned_rate_limits": { 00:10:02.090 "rw_ios_per_sec": 0, 00:10:02.090 "rw_mbytes_per_sec": 0, 00:10:02.090 "r_mbytes_per_sec": 0, 00:10:02.090 "w_mbytes_per_sec": 0 00:10:02.090 }, 00:10:02.090 "claimed": true, 00:10:02.090 "claim_type": "exclusive_write", 00:10:02.090 "zoned": false, 00:10:02.090 "supported_io_types": { 
00:10:02.090 "read": true, 00:10:02.090 "write": true, 00:10:02.090 "unmap": true, 00:10:02.090 "flush": true, 00:10:02.090 "reset": true, 00:10:02.090 "nvme_admin": false, 00:10:02.090 "nvme_io": false, 00:10:02.090 "nvme_io_md": false, 00:10:02.090 "write_zeroes": true, 00:10:02.090 "zcopy": true, 00:10:02.090 "get_zone_info": false, 00:10:02.090 "zone_management": false, 00:10:02.090 "zone_append": false, 00:10:02.090 "compare": false, 00:10:02.090 "compare_and_write": false, 00:10:02.090 "abort": true, 00:10:02.090 "seek_hole": false, 00:10:02.090 "seek_data": false, 00:10:02.090 "copy": true, 00:10:02.090 "nvme_iov_md": false 00:10:02.090 }, 00:10:02.090 "memory_domains": [ 00:10:02.090 { 00:10:02.090 "dma_device_id": "system", 00:10:02.090 "dma_device_type": 1 00:10:02.090 }, 00:10:02.090 { 00:10:02.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.090 "dma_device_type": 2 00:10:02.090 } 00:10:02.090 ], 00:10:02.090 "driver_specific": {} 00:10:02.090 } 00:10:02.090 ] 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.090 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.350 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.350 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.350 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.350 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.350 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.350 "name": "Existed_Raid", 00:10:02.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.350 "strip_size_kb": 64, 00:10:02.350 "state": "configuring", 00:10:02.350 "raid_level": "raid0", 00:10:02.350 "superblock": false, 00:10:02.350 "num_base_bdevs": 4, 00:10:02.350 "num_base_bdevs_discovered": 2, 00:10:02.350 "num_base_bdevs_operational": 4, 00:10:02.350 "base_bdevs_list": [ 00:10:02.350 { 00:10:02.350 "name": "BaseBdev1", 00:10:02.350 "uuid": "478b2381-029b-4aed-bcdf-d85cf10ac01f", 00:10:02.350 "is_configured": true, 00:10:02.350 "data_offset": 0, 00:10:02.350 "data_size": 65536 00:10:02.350 }, 00:10:02.350 { 00:10:02.350 "name": "BaseBdev2", 00:10:02.350 "uuid": "55d8a937-4c2f-4484-9120-dc603a957f30", 00:10:02.350 
"is_configured": true, 00:10:02.350 "data_offset": 0, 00:10:02.350 "data_size": 65536 00:10:02.350 }, 00:10:02.350 { 00:10:02.350 "name": "BaseBdev3", 00:10:02.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.350 "is_configured": false, 00:10:02.350 "data_offset": 0, 00:10:02.350 "data_size": 0 00:10:02.350 }, 00:10:02.351 { 00:10:02.351 "name": "BaseBdev4", 00:10:02.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.351 "is_configured": false, 00:10:02.351 "data_offset": 0, 00:10:02.351 "data_size": 0 00:10:02.351 } 00:10:02.351 ] 00:10:02.351 }' 00:10:02.351 23:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.351 23:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 [2024-12-06 23:44:14.126433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.611 BaseBdev3 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 [ 00:10:02.611 { 00:10:02.611 "name": "BaseBdev3", 00:10:02.611 "aliases": [ 00:10:02.611 "1049466a-eafd-4448-97fa-9919d11b7bd3" 00:10:02.611 ], 00:10:02.611 "product_name": "Malloc disk", 00:10:02.611 "block_size": 512, 00:10:02.611 "num_blocks": 65536, 00:10:02.611 "uuid": "1049466a-eafd-4448-97fa-9919d11b7bd3", 00:10:02.611 "assigned_rate_limits": { 00:10:02.611 "rw_ios_per_sec": 0, 00:10:02.611 "rw_mbytes_per_sec": 0, 00:10:02.611 "r_mbytes_per_sec": 0, 00:10:02.611 "w_mbytes_per_sec": 0 00:10:02.611 }, 00:10:02.611 "claimed": true, 00:10:02.611 "claim_type": "exclusive_write", 00:10:02.611 "zoned": false, 00:10:02.611 "supported_io_types": { 00:10:02.611 "read": true, 00:10:02.611 "write": true, 00:10:02.611 "unmap": true, 00:10:02.611 "flush": true, 00:10:02.611 "reset": true, 00:10:02.611 "nvme_admin": false, 00:10:02.611 "nvme_io": false, 00:10:02.611 "nvme_io_md": false, 00:10:02.611 "write_zeroes": true, 00:10:02.611 "zcopy": true, 00:10:02.611 "get_zone_info": false, 00:10:02.611 "zone_management": false, 00:10:02.611 "zone_append": false, 00:10:02.611 "compare": false, 00:10:02.611 "compare_and_write": false, 
00:10:02.611 "abort": true, 00:10:02.611 "seek_hole": false, 00:10:02.611 "seek_data": false, 00:10:02.611 "copy": true, 00:10:02.611 "nvme_iov_md": false 00:10:02.611 }, 00:10:02.611 "memory_domains": [ 00:10:02.611 { 00:10:02.611 "dma_device_id": "system", 00:10:02.611 "dma_device_type": 1 00:10:02.611 }, 00:10:02.611 { 00:10:02.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.611 "dma_device_type": 2 00:10:02.611 } 00:10:02.611 ], 00:10:02.611 "driver_specific": {} 00:10:02.611 } 00:10:02.611 ] 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.871 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.871 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.871 "name": "Existed_Raid", 00:10:02.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.871 "strip_size_kb": 64, 00:10:02.871 "state": "configuring", 00:10:02.871 "raid_level": "raid0", 00:10:02.871 "superblock": false, 00:10:02.871 "num_base_bdevs": 4, 00:10:02.871 "num_base_bdevs_discovered": 3, 00:10:02.871 "num_base_bdevs_operational": 4, 00:10:02.871 "base_bdevs_list": [ 00:10:02.871 { 00:10:02.871 "name": "BaseBdev1", 00:10:02.871 "uuid": "478b2381-029b-4aed-bcdf-d85cf10ac01f", 00:10:02.871 "is_configured": true, 00:10:02.871 "data_offset": 0, 00:10:02.871 "data_size": 65536 00:10:02.871 }, 00:10:02.871 { 00:10:02.871 "name": "BaseBdev2", 00:10:02.871 "uuid": "55d8a937-4c2f-4484-9120-dc603a957f30", 00:10:02.871 "is_configured": true, 00:10:02.871 "data_offset": 0, 00:10:02.871 "data_size": 65536 00:10:02.871 }, 00:10:02.871 { 00:10:02.871 "name": "BaseBdev3", 00:10:02.871 "uuid": "1049466a-eafd-4448-97fa-9919d11b7bd3", 00:10:02.871 "is_configured": true, 00:10:02.871 "data_offset": 0, 00:10:02.871 "data_size": 65536 00:10:02.871 }, 00:10:02.871 { 00:10:02.871 "name": "BaseBdev4", 00:10:02.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.871 "is_configured": false, 
00:10:02.871 "data_offset": 0, 00:10:02.871 "data_size": 0 00:10:02.871 } 00:10:02.871 ] 00:10:02.871 }' 00:10:02.871 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.871 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.131 [2024-12-06 23:44:14.585645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:03.131 [2024-12-06 23:44:14.585803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:03.131 [2024-12-06 23:44:14.585838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:03.131 [2024-12-06 23:44:14.586197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:03.131 [2024-12-06 23:44:14.586455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:03.131 [2024-12-06 23:44:14.586500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:03.131 [2024-12-06 23:44:14.586863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.131 BaseBdev4 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.131 [ 00:10:03.131 { 00:10:03.131 "name": "BaseBdev4", 00:10:03.131 "aliases": [ 00:10:03.131 "fc9a66f8-6cc6-414a-8aea-69801b103926" 00:10:03.131 ], 00:10:03.131 "product_name": "Malloc disk", 00:10:03.131 "block_size": 512, 00:10:03.131 "num_blocks": 65536, 00:10:03.131 "uuid": "fc9a66f8-6cc6-414a-8aea-69801b103926", 00:10:03.131 "assigned_rate_limits": { 00:10:03.131 "rw_ios_per_sec": 0, 00:10:03.131 "rw_mbytes_per_sec": 0, 00:10:03.131 "r_mbytes_per_sec": 0, 00:10:03.131 "w_mbytes_per_sec": 0 00:10:03.131 }, 00:10:03.131 "claimed": true, 00:10:03.131 "claim_type": "exclusive_write", 00:10:03.131 "zoned": false, 00:10:03.131 "supported_io_types": { 00:10:03.131 "read": true, 00:10:03.131 "write": true, 00:10:03.131 "unmap": true, 00:10:03.131 "flush": true, 00:10:03.131 "reset": true, 00:10:03.131 
"nvme_admin": false, 00:10:03.131 "nvme_io": false, 00:10:03.131 "nvme_io_md": false, 00:10:03.131 "write_zeroes": true, 00:10:03.131 "zcopy": true, 00:10:03.131 "get_zone_info": false, 00:10:03.131 "zone_management": false, 00:10:03.131 "zone_append": false, 00:10:03.131 "compare": false, 00:10:03.131 "compare_and_write": false, 00:10:03.131 "abort": true, 00:10:03.131 "seek_hole": false, 00:10:03.131 "seek_data": false, 00:10:03.131 "copy": true, 00:10:03.131 "nvme_iov_md": false 00:10:03.131 }, 00:10:03.131 "memory_domains": [ 00:10:03.131 { 00:10:03.131 "dma_device_id": "system", 00:10:03.131 "dma_device_type": 1 00:10:03.131 }, 00:10:03.131 { 00:10:03.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.131 "dma_device_type": 2 00:10:03.131 } 00:10:03.131 ], 00:10:03.131 "driver_specific": {} 00:10:03.131 } 00:10:03.131 ] 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.131 23:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.131 "name": "Existed_Raid", 00:10:03.131 "uuid": "f665c809-4382-4291-b29e-a50a6ad3aee3", 00:10:03.131 "strip_size_kb": 64, 00:10:03.131 "state": "online", 00:10:03.131 "raid_level": "raid0", 00:10:03.131 "superblock": false, 00:10:03.131 "num_base_bdevs": 4, 00:10:03.131 "num_base_bdevs_discovered": 4, 00:10:03.131 "num_base_bdevs_operational": 4, 00:10:03.131 "base_bdevs_list": [ 00:10:03.131 { 00:10:03.131 "name": "BaseBdev1", 00:10:03.131 "uuid": "478b2381-029b-4aed-bcdf-d85cf10ac01f", 00:10:03.131 "is_configured": true, 00:10:03.131 "data_offset": 0, 00:10:03.131 "data_size": 65536 00:10:03.131 }, 00:10:03.131 { 00:10:03.131 "name": "BaseBdev2", 00:10:03.131 "uuid": "55d8a937-4c2f-4484-9120-dc603a957f30", 00:10:03.131 "is_configured": true, 00:10:03.131 "data_offset": 0, 00:10:03.131 "data_size": 65536 00:10:03.131 }, 00:10:03.131 { 00:10:03.131 "name": "BaseBdev3", 00:10:03.131 "uuid": 
"1049466a-eafd-4448-97fa-9919d11b7bd3", 00:10:03.131 "is_configured": true, 00:10:03.131 "data_offset": 0, 00:10:03.131 "data_size": 65536 00:10:03.131 }, 00:10:03.131 { 00:10:03.131 "name": "BaseBdev4", 00:10:03.131 "uuid": "fc9a66f8-6cc6-414a-8aea-69801b103926", 00:10:03.131 "is_configured": true, 00:10:03.131 "data_offset": 0, 00:10:03.131 "data_size": 65536 00:10:03.131 } 00:10:03.131 ] 00:10:03.131 }' 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.131 23:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.706 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.706 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.707 [2024-12-06 23:44:15.089195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.707 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.707 23:44:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.707 "name": "Existed_Raid", 00:10:03.707 "aliases": [ 00:10:03.707 "f665c809-4382-4291-b29e-a50a6ad3aee3" 00:10:03.707 ], 00:10:03.707 "product_name": "Raid Volume", 00:10:03.707 "block_size": 512, 00:10:03.707 "num_blocks": 262144, 00:10:03.707 "uuid": "f665c809-4382-4291-b29e-a50a6ad3aee3", 00:10:03.707 "assigned_rate_limits": { 00:10:03.707 "rw_ios_per_sec": 0, 00:10:03.707 "rw_mbytes_per_sec": 0, 00:10:03.707 "r_mbytes_per_sec": 0, 00:10:03.707 "w_mbytes_per_sec": 0 00:10:03.707 }, 00:10:03.707 "claimed": false, 00:10:03.707 "zoned": false, 00:10:03.707 "supported_io_types": { 00:10:03.707 "read": true, 00:10:03.707 "write": true, 00:10:03.707 "unmap": true, 00:10:03.707 "flush": true, 00:10:03.707 "reset": true, 00:10:03.707 "nvme_admin": false, 00:10:03.707 "nvme_io": false, 00:10:03.707 "nvme_io_md": false, 00:10:03.707 "write_zeroes": true, 00:10:03.707 "zcopy": false, 00:10:03.707 "get_zone_info": false, 00:10:03.707 "zone_management": false, 00:10:03.707 "zone_append": false, 00:10:03.707 "compare": false, 00:10:03.707 "compare_and_write": false, 00:10:03.707 "abort": false, 00:10:03.707 "seek_hole": false, 00:10:03.707 "seek_data": false, 00:10:03.707 "copy": false, 00:10:03.707 "nvme_iov_md": false 00:10:03.707 }, 00:10:03.707 "memory_domains": [ 00:10:03.707 { 00:10:03.707 "dma_device_id": "system", 00:10:03.707 "dma_device_type": 1 00:10:03.707 }, 00:10:03.707 { 00:10:03.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.708 "dma_device_type": 2 00:10:03.708 }, 00:10:03.708 { 00:10:03.708 "dma_device_id": "system", 00:10:03.708 "dma_device_type": 1 00:10:03.708 }, 00:10:03.708 { 00:10:03.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.708 "dma_device_type": 2 00:10:03.708 }, 00:10:03.708 { 00:10:03.708 "dma_device_id": "system", 00:10:03.708 "dma_device_type": 1 00:10:03.708 }, 00:10:03.708 { 00:10:03.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:03.708 "dma_device_type": 2 00:10:03.708 }, 00:10:03.708 { 00:10:03.708 "dma_device_id": "system", 00:10:03.708 "dma_device_type": 1 00:10:03.708 }, 00:10:03.708 { 00:10:03.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.708 "dma_device_type": 2 00:10:03.708 } 00:10:03.708 ], 00:10:03.708 "driver_specific": { 00:10:03.708 "raid": { 00:10:03.708 "uuid": "f665c809-4382-4291-b29e-a50a6ad3aee3", 00:10:03.708 "strip_size_kb": 64, 00:10:03.708 "state": "online", 00:10:03.708 "raid_level": "raid0", 00:10:03.708 "superblock": false, 00:10:03.708 "num_base_bdevs": 4, 00:10:03.708 "num_base_bdevs_discovered": 4, 00:10:03.708 "num_base_bdevs_operational": 4, 00:10:03.708 "base_bdevs_list": [ 00:10:03.708 { 00:10:03.708 "name": "BaseBdev1", 00:10:03.708 "uuid": "478b2381-029b-4aed-bcdf-d85cf10ac01f", 00:10:03.708 "is_configured": true, 00:10:03.708 "data_offset": 0, 00:10:03.709 "data_size": 65536 00:10:03.709 }, 00:10:03.709 { 00:10:03.709 "name": "BaseBdev2", 00:10:03.709 "uuid": "55d8a937-4c2f-4484-9120-dc603a957f30", 00:10:03.709 "is_configured": true, 00:10:03.709 "data_offset": 0, 00:10:03.709 "data_size": 65536 00:10:03.709 }, 00:10:03.709 { 00:10:03.709 "name": "BaseBdev3", 00:10:03.709 "uuid": "1049466a-eafd-4448-97fa-9919d11b7bd3", 00:10:03.709 "is_configured": true, 00:10:03.709 "data_offset": 0, 00:10:03.709 "data_size": 65536 00:10:03.709 }, 00:10:03.709 { 00:10:03.709 "name": "BaseBdev4", 00:10:03.709 "uuid": "fc9a66f8-6cc6-414a-8aea-69801b103926", 00:10:03.709 "is_configured": true, 00:10:03.709 "data_offset": 0, 00:10:03.709 "data_size": 65536 00:10:03.709 } 00:10:03.709 ] 00:10:03.709 } 00:10:03.709 } 00:10:03.709 }' 00:10:03.709 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.709 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:03.709 BaseBdev2 00:10:03.709 BaseBdev3 
00:10:03.709 BaseBdev4' 00:10:03.709 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.710 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.710 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.710 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:03.710 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.710 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.710 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.710 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.974 23:44:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.974 23:44:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.974 [2024-12-06 23:44:15.424329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.974 [2024-12-06 23:44:15.424440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.974 [2024-12-06 23:44:15.424515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.974 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.233 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.233 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.233 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.233 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.233 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.233 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.233 "name": "Existed_Raid", 00:10:04.233 "uuid": "f665c809-4382-4291-b29e-a50a6ad3aee3", 00:10:04.233 "strip_size_kb": 64, 00:10:04.233 "state": "offline", 00:10:04.233 "raid_level": "raid0", 00:10:04.233 "superblock": false, 00:10:04.233 "num_base_bdevs": 4, 00:10:04.233 "num_base_bdevs_discovered": 3, 00:10:04.233 "num_base_bdevs_operational": 3, 00:10:04.233 "base_bdevs_list": [ 00:10:04.233 { 00:10:04.233 "name": null, 00:10:04.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.233 "is_configured": false, 00:10:04.233 "data_offset": 0, 00:10:04.233 "data_size": 65536 00:10:04.233 }, 00:10:04.234 { 00:10:04.234 "name": "BaseBdev2", 00:10:04.234 "uuid": "55d8a937-4c2f-4484-9120-dc603a957f30", 00:10:04.234 "is_configured": 
true, 00:10:04.234 "data_offset": 0, 00:10:04.234 "data_size": 65536 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "name": "BaseBdev3", 00:10:04.234 "uuid": "1049466a-eafd-4448-97fa-9919d11b7bd3", 00:10:04.234 "is_configured": true, 00:10:04.234 "data_offset": 0, 00:10:04.234 "data_size": 65536 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "name": "BaseBdev4", 00:10:04.234 "uuid": "fc9a66f8-6cc6-414a-8aea-69801b103926", 00:10:04.234 "is_configured": true, 00:10:04.234 "data_offset": 0, 00:10:04.234 "data_size": 65536 00:10:04.234 } 00:10:04.234 ] 00:10:04.234 }' 00:10:04.234 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.234 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.493 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:04.493 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.493 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.493 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.493 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.493 23:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.493 23:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.493 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.493 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.493 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:04.493 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:04.493 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.493 [2024-12-06 23:44:16.022438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 [2024-12-06 23:44:16.184366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.752 23:44:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.011 [2024-12-06 23:44:16.350187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:05.011 [2024-12-06 23:44:16.350256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.011 BaseBdev2 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.011 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.271 [ 00:10:05.271 { 00:10:05.271 "name": "BaseBdev2", 00:10:05.271 "aliases": [ 00:10:05.271 "1ff41a82-759c-4236-b96f-ac6bcc92d66f" 00:10:05.271 ], 00:10:05.271 "product_name": "Malloc disk", 00:10:05.271 "block_size": 512, 00:10:05.271 "num_blocks": 65536, 00:10:05.271 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:05.271 "assigned_rate_limits": { 00:10:05.271 "rw_ios_per_sec": 0, 00:10:05.271 "rw_mbytes_per_sec": 0, 00:10:05.271 "r_mbytes_per_sec": 0, 00:10:05.271 "w_mbytes_per_sec": 0 00:10:05.271 }, 00:10:05.271 "claimed": false, 00:10:05.271 "zoned": false, 00:10:05.271 "supported_io_types": { 00:10:05.271 "read": true, 00:10:05.271 "write": true, 00:10:05.271 "unmap": true, 00:10:05.271 "flush": true, 00:10:05.271 "reset": true, 00:10:05.271 "nvme_admin": false, 00:10:05.271 "nvme_io": false, 00:10:05.271 "nvme_io_md": false, 00:10:05.271 "write_zeroes": true, 00:10:05.271 "zcopy": true, 00:10:05.271 "get_zone_info": false, 00:10:05.271 "zone_management": false, 00:10:05.271 "zone_append": false, 00:10:05.271 "compare": false, 00:10:05.271 "compare_and_write": false, 00:10:05.271 "abort": true, 00:10:05.271 "seek_hole": false, 00:10:05.271 
"seek_data": false, 00:10:05.271 "copy": true, 00:10:05.271 "nvme_iov_md": false 00:10:05.271 }, 00:10:05.271 "memory_domains": [ 00:10:05.271 { 00:10:05.271 "dma_device_id": "system", 00:10:05.271 "dma_device_type": 1 00:10:05.271 }, 00:10:05.271 { 00:10:05.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.271 "dma_device_type": 2 00:10:05.271 } 00:10:05.271 ], 00:10:05.271 "driver_specific": {} 00:10:05.271 } 00:10:05.271 ] 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.271 BaseBdev3 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.271 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.272 [ 00:10:05.272 { 00:10:05.272 "name": "BaseBdev3", 00:10:05.272 "aliases": [ 00:10:05.272 "96f3d274-0994-475c-b4fd-d74d4b8c753a" 00:10:05.272 ], 00:10:05.272 "product_name": "Malloc disk", 00:10:05.272 "block_size": 512, 00:10:05.272 "num_blocks": 65536, 00:10:05.272 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:05.272 "assigned_rate_limits": { 00:10:05.272 "rw_ios_per_sec": 0, 00:10:05.272 "rw_mbytes_per_sec": 0, 00:10:05.272 "r_mbytes_per_sec": 0, 00:10:05.272 "w_mbytes_per_sec": 0 00:10:05.272 }, 00:10:05.272 "claimed": false, 00:10:05.272 "zoned": false, 00:10:05.272 "supported_io_types": { 00:10:05.272 "read": true, 00:10:05.272 "write": true, 00:10:05.272 "unmap": true, 00:10:05.272 "flush": true, 00:10:05.272 "reset": true, 00:10:05.272 "nvme_admin": false, 00:10:05.272 "nvme_io": false, 00:10:05.272 "nvme_io_md": false, 00:10:05.272 "write_zeroes": true, 00:10:05.272 "zcopy": true, 00:10:05.272 "get_zone_info": false, 00:10:05.272 "zone_management": false, 00:10:05.272 "zone_append": false, 00:10:05.272 "compare": false, 00:10:05.272 "compare_and_write": false, 00:10:05.272 "abort": true, 00:10:05.272 "seek_hole": false, 00:10:05.272 "seek_data": false, 
00:10:05.272 "copy": true, 00:10:05.272 "nvme_iov_md": false 00:10:05.272 }, 00:10:05.272 "memory_domains": [ 00:10:05.272 { 00:10:05.272 "dma_device_id": "system", 00:10:05.272 "dma_device_type": 1 00:10:05.272 }, 00:10:05.272 { 00:10:05.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.272 "dma_device_type": 2 00:10:05.272 } 00:10:05.272 ], 00:10:05.272 "driver_specific": {} 00:10:05.272 } 00:10:05.272 ] 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.272 BaseBdev4 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.272 
23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.272 [ 00:10:05.272 { 00:10:05.272 "name": "BaseBdev4", 00:10:05.272 "aliases": [ 00:10:05.272 "562f4d9b-4b22-47b4-8194-da1a24b5e0e2" 00:10:05.272 ], 00:10:05.272 "product_name": "Malloc disk", 00:10:05.272 "block_size": 512, 00:10:05.272 "num_blocks": 65536, 00:10:05.272 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:05.272 "assigned_rate_limits": { 00:10:05.272 "rw_ios_per_sec": 0, 00:10:05.272 "rw_mbytes_per_sec": 0, 00:10:05.272 "r_mbytes_per_sec": 0, 00:10:05.272 "w_mbytes_per_sec": 0 00:10:05.272 }, 00:10:05.272 "claimed": false, 00:10:05.272 "zoned": false, 00:10:05.272 "supported_io_types": { 00:10:05.272 "read": true, 00:10:05.272 "write": true, 00:10:05.272 "unmap": true, 00:10:05.272 "flush": true, 00:10:05.272 "reset": true, 00:10:05.272 "nvme_admin": false, 00:10:05.272 "nvme_io": false, 00:10:05.272 "nvme_io_md": false, 00:10:05.272 "write_zeroes": true, 00:10:05.272 "zcopy": true, 00:10:05.272 "get_zone_info": false, 00:10:05.272 "zone_management": false, 00:10:05.272 "zone_append": false, 00:10:05.272 "compare": false, 00:10:05.272 "compare_and_write": false, 00:10:05.272 "abort": true, 00:10:05.272 "seek_hole": false, 00:10:05.272 "seek_data": false, 00:10:05.272 
"copy": true, 00:10:05.272 "nvme_iov_md": false 00:10:05.272 }, 00:10:05.272 "memory_domains": [ 00:10:05.272 { 00:10:05.272 "dma_device_id": "system", 00:10:05.272 "dma_device_type": 1 00:10:05.272 }, 00:10:05.272 { 00:10:05.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.272 "dma_device_type": 2 00:10:05.272 } 00:10:05.272 ], 00:10:05.272 "driver_specific": {} 00:10:05.272 } 00:10:05.272 ] 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.272 [2024-12-06 23:44:16.769189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.272 [2024-12-06 23:44:16.769371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.272 [2024-12-06 23:44:16.769411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.272 [2024-12-06 23:44:16.771671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.272 [2024-12-06 23:44:16.771745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.272 23:44:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.272 "name": "Existed_Raid", 00:10:05.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.272 "strip_size_kb": 64, 00:10:05.272 "state": "configuring", 00:10:05.272 
"raid_level": "raid0", 00:10:05.272 "superblock": false, 00:10:05.272 "num_base_bdevs": 4, 00:10:05.272 "num_base_bdevs_discovered": 3, 00:10:05.272 "num_base_bdevs_operational": 4, 00:10:05.272 "base_bdevs_list": [ 00:10:05.272 { 00:10:05.272 "name": "BaseBdev1", 00:10:05.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.272 "is_configured": false, 00:10:05.272 "data_offset": 0, 00:10:05.272 "data_size": 0 00:10:05.272 }, 00:10:05.272 { 00:10:05.272 "name": "BaseBdev2", 00:10:05.272 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:05.272 "is_configured": true, 00:10:05.272 "data_offset": 0, 00:10:05.272 "data_size": 65536 00:10:05.272 }, 00:10:05.272 { 00:10:05.272 "name": "BaseBdev3", 00:10:05.272 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:05.272 "is_configured": true, 00:10:05.272 "data_offset": 0, 00:10:05.272 "data_size": 65536 00:10:05.272 }, 00:10:05.272 { 00:10:05.272 "name": "BaseBdev4", 00:10:05.272 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:05.272 "is_configured": true, 00:10:05.272 "data_offset": 0, 00:10:05.272 "data_size": 65536 00:10:05.272 } 00:10:05.272 ] 00:10:05.272 }' 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.272 23:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.841 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:05.841 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.841 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.841 [2024-12-06 23:44:17.240432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.842 "name": "Existed_Raid", 00:10:05.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.842 "strip_size_kb": 64, 00:10:05.842 "state": "configuring", 00:10:05.842 "raid_level": "raid0", 00:10:05.842 "superblock": false, 00:10:05.842 
"num_base_bdevs": 4, 00:10:05.842 "num_base_bdevs_discovered": 2, 00:10:05.842 "num_base_bdevs_operational": 4, 00:10:05.842 "base_bdevs_list": [ 00:10:05.842 { 00:10:05.842 "name": "BaseBdev1", 00:10:05.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.842 "is_configured": false, 00:10:05.842 "data_offset": 0, 00:10:05.842 "data_size": 0 00:10:05.842 }, 00:10:05.842 { 00:10:05.842 "name": null, 00:10:05.842 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:05.842 "is_configured": false, 00:10:05.842 "data_offset": 0, 00:10:05.842 "data_size": 65536 00:10:05.842 }, 00:10:05.842 { 00:10:05.842 "name": "BaseBdev3", 00:10:05.842 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:05.842 "is_configured": true, 00:10:05.842 "data_offset": 0, 00:10:05.842 "data_size": 65536 00:10:05.842 }, 00:10:05.842 { 00:10:05.842 "name": "BaseBdev4", 00:10:05.842 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:05.842 "is_configured": true, 00:10:05.842 "data_offset": 0, 00:10:05.842 "data_size": 65536 00:10:05.842 } 00:10:05.842 ] 00:10:05.842 }' 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.842 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.102 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.102 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.102 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.102 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:06.362 23:44:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.362 [2024-12-06 23:44:17.750264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.362 BaseBdev1 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.362 [ 00:10:06.362 { 00:10:06.362 "name": "BaseBdev1", 00:10:06.362 "aliases": [ 00:10:06.362 "cc6653eb-7936-4484-b828-de18ce9b57e5" 00:10:06.362 ], 00:10:06.362 "product_name": "Malloc disk", 00:10:06.362 "block_size": 512, 00:10:06.362 "num_blocks": 65536, 00:10:06.362 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:06.362 "assigned_rate_limits": { 00:10:06.362 "rw_ios_per_sec": 0, 00:10:06.362 "rw_mbytes_per_sec": 0, 00:10:06.362 "r_mbytes_per_sec": 0, 00:10:06.362 "w_mbytes_per_sec": 0 00:10:06.362 }, 00:10:06.362 "claimed": true, 00:10:06.362 "claim_type": "exclusive_write", 00:10:06.362 "zoned": false, 00:10:06.362 "supported_io_types": { 00:10:06.362 "read": true, 00:10:06.362 "write": true, 00:10:06.362 "unmap": true, 00:10:06.362 "flush": true, 00:10:06.362 "reset": true, 00:10:06.362 "nvme_admin": false, 00:10:06.362 "nvme_io": false, 00:10:06.362 "nvme_io_md": false, 00:10:06.362 "write_zeroes": true, 00:10:06.362 "zcopy": true, 00:10:06.362 "get_zone_info": false, 00:10:06.362 "zone_management": false, 00:10:06.362 "zone_append": false, 00:10:06.362 "compare": false, 00:10:06.362 "compare_and_write": false, 00:10:06.362 "abort": true, 00:10:06.362 "seek_hole": false, 00:10:06.362 "seek_data": false, 00:10:06.362 "copy": true, 00:10:06.362 "nvme_iov_md": false 00:10:06.362 }, 00:10:06.362 "memory_domains": [ 00:10:06.362 { 00:10:06.362 "dma_device_id": "system", 00:10:06.362 "dma_device_type": 1 00:10:06.362 }, 00:10:06.362 { 00:10:06.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.362 "dma_device_type": 2 00:10:06.362 } 00:10:06.362 ], 00:10:06.362 "driver_specific": {} 00:10:06.362 } 00:10:06.362 ] 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.362 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.362 "name": "Existed_Raid", 00:10:06.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.362 "strip_size_kb": 64, 00:10:06.362 "state": "configuring", 00:10:06.362 "raid_level": "raid0", 00:10:06.362 "superblock": false, 
00:10:06.362 "num_base_bdevs": 4, 00:10:06.362 "num_base_bdevs_discovered": 3, 00:10:06.363 "num_base_bdevs_operational": 4, 00:10:06.363 "base_bdevs_list": [ 00:10:06.363 { 00:10:06.363 "name": "BaseBdev1", 00:10:06.363 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:06.363 "is_configured": true, 00:10:06.363 "data_offset": 0, 00:10:06.363 "data_size": 65536 00:10:06.363 }, 00:10:06.363 { 00:10:06.363 "name": null, 00:10:06.363 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:06.363 "is_configured": false, 00:10:06.363 "data_offset": 0, 00:10:06.363 "data_size": 65536 00:10:06.363 }, 00:10:06.363 { 00:10:06.363 "name": "BaseBdev3", 00:10:06.363 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:06.363 "is_configured": true, 00:10:06.363 "data_offset": 0, 00:10:06.363 "data_size": 65536 00:10:06.363 }, 00:10:06.363 { 00:10:06.363 "name": "BaseBdev4", 00:10:06.363 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:06.363 "is_configured": true, 00:10:06.363 "data_offset": 0, 00:10:06.363 "data_size": 65536 00:10:06.363 } 00:10:06.363 ] 00:10:06.363 }' 00:10:06.363 23:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.363 23:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:06.931 23:44:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.931 [2024-12-06 23:44:18.309454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.931 23:44:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.931 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.931 "name": "Existed_Raid", 00:10:06.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.931 "strip_size_kb": 64, 00:10:06.931 "state": "configuring", 00:10:06.931 "raid_level": "raid0", 00:10:06.931 "superblock": false, 00:10:06.931 "num_base_bdevs": 4, 00:10:06.931 "num_base_bdevs_discovered": 2, 00:10:06.931 "num_base_bdevs_operational": 4, 00:10:06.932 "base_bdevs_list": [ 00:10:06.932 { 00:10:06.932 "name": "BaseBdev1", 00:10:06.932 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:06.932 "is_configured": true, 00:10:06.932 "data_offset": 0, 00:10:06.932 "data_size": 65536 00:10:06.932 }, 00:10:06.932 { 00:10:06.932 "name": null, 00:10:06.932 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:06.932 "is_configured": false, 00:10:06.932 "data_offset": 0, 00:10:06.932 "data_size": 65536 00:10:06.932 }, 00:10:06.932 { 00:10:06.932 "name": null, 00:10:06.932 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:06.932 "is_configured": false, 00:10:06.932 "data_offset": 0, 00:10:06.932 "data_size": 65536 00:10:06.932 }, 00:10:06.932 { 00:10:06.932 "name": "BaseBdev4", 00:10:06.932 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:06.932 "is_configured": true, 00:10:06.932 "data_offset": 0, 00:10:06.932 "data_size": 65536 00:10:06.932 } 00:10:06.932 ] 00:10:06.932 }' 00:10:06.932 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.932 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.191 [2024-12-06 23:44:18.732683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.191 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.451 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.451 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.451 "name": "Existed_Raid", 00:10:07.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.451 "strip_size_kb": 64, 00:10:07.451 "state": "configuring", 00:10:07.451 "raid_level": "raid0", 00:10:07.451 "superblock": false, 00:10:07.451 "num_base_bdevs": 4, 00:10:07.451 "num_base_bdevs_discovered": 3, 00:10:07.451 "num_base_bdevs_operational": 4, 00:10:07.451 "base_bdevs_list": [ 00:10:07.451 { 00:10:07.451 "name": "BaseBdev1", 00:10:07.451 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:07.451 "is_configured": true, 00:10:07.451 "data_offset": 0, 00:10:07.451 "data_size": 65536 00:10:07.451 }, 00:10:07.451 { 00:10:07.451 "name": null, 00:10:07.451 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:07.451 "is_configured": false, 00:10:07.451 "data_offset": 0, 00:10:07.451 "data_size": 65536 00:10:07.451 }, 00:10:07.451 { 00:10:07.451 "name": "BaseBdev3", 00:10:07.451 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 
00:10:07.451 "is_configured": true, 00:10:07.451 "data_offset": 0, 00:10:07.451 "data_size": 65536 00:10:07.451 }, 00:10:07.451 { 00:10:07.451 "name": "BaseBdev4", 00:10:07.451 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:07.451 "is_configured": true, 00:10:07.451 "data_offset": 0, 00:10:07.451 "data_size": 65536 00:10:07.451 } 00:10:07.451 ] 00:10:07.451 }' 00:10:07.451 23:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.451 23:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.711 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 [2024-12-06 23:44:19.255939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.970 23:44:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.970 "name": "Existed_Raid", 00:10:07.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.970 "strip_size_kb": 64, 00:10:07.970 "state": "configuring", 00:10:07.970 "raid_level": "raid0", 00:10:07.970 "superblock": false, 00:10:07.970 "num_base_bdevs": 4, 00:10:07.970 "num_base_bdevs_discovered": 2, 00:10:07.970 
"num_base_bdevs_operational": 4, 00:10:07.970 "base_bdevs_list": [ 00:10:07.970 { 00:10:07.970 "name": null, 00:10:07.970 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:07.970 "is_configured": false, 00:10:07.970 "data_offset": 0, 00:10:07.970 "data_size": 65536 00:10:07.970 }, 00:10:07.970 { 00:10:07.970 "name": null, 00:10:07.970 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:07.970 "is_configured": false, 00:10:07.970 "data_offset": 0, 00:10:07.970 "data_size": 65536 00:10:07.970 }, 00:10:07.970 { 00:10:07.970 "name": "BaseBdev3", 00:10:07.970 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:07.970 "is_configured": true, 00:10:07.970 "data_offset": 0, 00:10:07.970 "data_size": 65536 00:10:07.970 }, 00:10:07.970 { 00:10:07.970 "name": "BaseBdev4", 00:10:07.970 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:07.970 "is_configured": true, 00:10:07.970 "data_offset": 0, 00:10:07.970 "data_size": 65536 00:10:07.970 } 00:10:07.970 ] 00:10:07.970 }' 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.970 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.538 [2024-12-06 23:44:19.885008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.538 
23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.538 "name": "Existed_Raid", 00:10:08.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.538 "strip_size_kb": 64, 00:10:08.538 "state": "configuring", 00:10:08.538 "raid_level": "raid0", 00:10:08.538 "superblock": false, 00:10:08.538 "num_base_bdevs": 4, 00:10:08.538 "num_base_bdevs_discovered": 3, 00:10:08.538 "num_base_bdevs_operational": 4, 00:10:08.538 "base_bdevs_list": [ 00:10:08.538 { 00:10:08.538 "name": null, 00:10:08.538 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:08.538 "is_configured": false, 00:10:08.538 "data_offset": 0, 00:10:08.538 "data_size": 65536 00:10:08.538 }, 00:10:08.538 { 00:10:08.538 "name": "BaseBdev2", 00:10:08.538 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:08.538 "is_configured": true, 00:10:08.538 "data_offset": 0, 00:10:08.538 "data_size": 65536 00:10:08.538 }, 00:10:08.538 { 00:10:08.538 "name": "BaseBdev3", 00:10:08.538 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:08.538 "is_configured": true, 00:10:08.538 "data_offset": 0, 00:10:08.538 "data_size": 65536 00:10:08.538 }, 00:10:08.538 { 00:10:08.538 "name": "BaseBdev4", 00:10:08.538 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:08.538 "is_configured": true, 00:10:08.538 "data_offset": 0, 00:10:08.538 "data_size": 65536 00:10:08.538 } 00:10:08.538 ] 00:10:08.538 }' 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.538 23:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.797 23:44:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.797 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.056 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.056 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc6653eb-7936-4484-b828-de18ce9b57e5 00:10:09.056 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.056 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.056 [2024-12-06 23:44:20.447127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:09.056 [2024-12-06 23:44:20.447287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.056 [2024-12-06 23:44:20.447329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:09.056 [2024-12-06 23:44:20.447680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:09.056 
[2024-12-06 23:44:20.447894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.056 [2024-12-06 23:44:20.447954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:09.056 [2024-12-06 23:44:20.448308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.057 NewBaseBdev 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:09.057 [ 00:10:09.057 { 00:10:09.057 "name": "NewBaseBdev", 00:10:09.057 "aliases": [ 00:10:09.057 "cc6653eb-7936-4484-b828-de18ce9b57e5" 00:10:09.057 ], 00:10:09.057 "product_name": "Malloc disk", 00:10:09.057 "block_size": 512, 00:10:09.057 "num_blocks": 65536, 00:10:09.057 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:09.057 "assigned_rate_limits": { 00:10:09.057 "rw_ios_per_sec": 0, 00:10:09.057 "rw_mbytes_per_sec": 0, 00:10:09.057 "r_mbytes_per_sec": 0, 00:10:09.057 "w_mbytes_per_sec": 0 00:10:09.057 }, 00:10:09.057 "claimed": true, 00:10:09.057 "claim_type": "exclusive_write", 00:10:09.057 "zoned": false, 00:10:09.057 "supported_io_types": { 00:10:09.057 "read": true, 00:10:09.057 "write": true, 00:10:09.057 "unmap": true, 00:10:09.057 "flush": true, 00:10:09.057 "reset": true, 00:10:09.057 "nvme_admin": false, 00:10:09.057 "nvme_io": false, 00:10:09.057 "nvme_io_md": false, 00:10:09.057 "write_zeroes": true, 00:10:09.057 "zcopy": true, 00:10:09.057 "get_zone_info": false, 00:10:09.057 "zone_management": false, 00:10:09.057 "zone_append": false, 00:10:09.057 "compare": false, 00:10:09.057 "compare_and_write": false, 00:10:09.057 "abort": true, 00:10:09.057 "seek_hole": false, 00:10:09.057 "seek_data": false, 00:10:09.057 "copy": true, 00:10:09.057 "nvme_iov_md": false 00:10:09.057 }, 00:10:09.057 "memory_domains": [ 00:10:09.057 { 00:10:09.057 "dma_device_id": "system", 00:10:09.057 "dma_device_type": 1 00:10:09.057 }, 00:10:09.057 { 00:10:09.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.057 "dma_device_type": 2 00:10:09.057 } 00:10:09.057 ], 00:10:09.057 "driver_specific": {} 00:10:09.057 } 00:10:09.057 ] 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.057 "name": "Existed_Raid", 00:10:09.057 "uuid": "7a1048ba-9572-4fec-a98f-72f3a91b79d6", 00:10:09.057 "strip_size_kb": 64, 00:10:09.057 "state": "online", 00:10:09.057 "raid_level": "raid0", 00:10:09.057 "superblock": false, 00:10:09.057 "num_base_bdevs": 4, 00:10:09.057 
"num_base_bdevs_discovered": 4, 00:10:09.057 "num_base_bdevs_operational": 4, 00:10:09.057 "base_bdevs_list": [ 00:10:09.057 { 00:10:09.057 "name": "NewBaseBdev", 00:10:09.057 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:09.057 "is_configured": true, 00:10:09.057 "data_offset": 0, 00:10:09.057 "data_size": 65536 00:10:09.057 }, 00:10:09.057 { 00:10:09.057 "name": "BaseBdev2", 00:10:09.057 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:09.057 "is_configured": true, 00:10:09.057 "data_offset": 0, 00:10:09.057 "data_size": 65536 00:10:09.057 }, 00:10:09.057 { 00:10:09.057 "name": "BaseBdev3", 00:10:09.057 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:09.057 "is_configured": true, 00:10:09.057 "data_offset": 0, 00:10:09.057 "data_size": 65536 00:10:09.057 }, 00:10:09.057 { 00:10:09.057 "name": "BaseBdev4", 00:10:09.057 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:09.057 "is_configured": true, 00:10:09.057 "data_offset": 0, 00:10:09.057 "data_size": 65536 00:10:09.057 } 00:10:09.057 ] 00:10:09.057 }' 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.057 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.627 [2024-12-06 23:44:20.914858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.627 "name": "Existed_Raid", 00:10:09.627 "aliases": [ 00:10:09.627 "7a1048ba-9572-4fec-a98f-72f3a91b79d6" 00:10:09.627 ], 00:10:09.627 "product_name": "Raid Volume", 00:10:09.627 "block_size": 512, 00:10:09.627 "num_blocks": 262144, 00:10:09.627 "uuid": "7a1048ba-9572-4fec-a98f-72f3a91b79d6", 00:10:09.627 "assigned_rate_limits": { 00:10:09.627 "rw_ios_per_sec": 0, 00:10:09.627 "rw_mbytes_per_sec": 0, 00:10:09.627 "r_mbytes_per_sec": 0, 00:10:09.627 "w_mbytes_per_sec": 0 00:10:09.627 }, 00:10:09.627 "claimed": false, 00:10:09.627 "zoned": false, 00:10:09.627 "supported_io_types": { 00:10:09.627 "read": true, 00:10:09.627 "write": true, 00:10:09.627 "unmap": true, 00:10:09.627 "flush": true, 00:10:09.627 "reset": true, 00:10:09.627 "nvme_admin": false, 00:10:09.627 "nvme_io": false, 00:10:09.627 "nvme_io_md": false, 00:10:09.627 "write_zeroes": true, 00:10:09.627 "zcopy": false, 00:10:09.627 "get_zone_info": false, 00:10:09.627 "zone_management": false, 00:10:09.627 "zone_append": false, 00:10:09.627 "compare": false, 00:10:09.627 "compare_and_write": false, 00:10:09.627 "abort": false, 00:10:09.627 "seek_hole": false, 00:10:09.627 "seek_data": false, 00:10:09.627 "copy": false, 00:10:09.627 "nvme_iov_md": false 00:10:09.627 }, 00:10:09.627 "memory_domains": [ 
00:10:09.627 { 00:10:09.627 "dma_device_id": "system", 00:10:09.627 "dma_device_type": 1 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.627 "dma_device_type": 2 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "dma_device_id": "system", 00:10:09.627 "dma_device_type": 1 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.627 "dma_device_type": 2 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "dma_device_id": "system", 00:10:09.627 "dma_device_type": 1 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.627 "dma_device_type": 2 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "dma_device_id": "system", 00:10:09.627 "dma_device_type": 1 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.627 "dma_device_type": 2 00:10:09.627 } 00:10:09.627 ], 00:10:09.627 "driver_specific": { 00:10:09.627 "raid": { 00:10:09.627 "uuid": "7a1048ba-9572-4fec-a98f-72f3a91b79d6", 00:10:09.627 "strip_size_kb": 64, 00:10:09.627 "state": "online", 00:10:09.627 "raid_level": "raid0", 00:10:09.627 "superblock": false, 00:10:09.627 "num_base_bdevs": 4, 00:10:09.627 "num_base_bdevs_discovered": 4, 00:10:09.627 "num_base_bdevs_operational": 4, 00:10:09.627 "base_bdevs_list": [ 00:10:09.627 { 00:10:09.627 "name": "NewBaseBdev", 00:10:09.627 "uuid": "cc6653eb-7936-4484-b828-de18ce9b57e5", 00:10:09.627 "is_configured": true, 00:10:09.627 "data_offset": 0, 00:10:09.627 "data_size": 65536 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "name": "BaseBdev2", 00:10:09.627 "uuid": "1ff41a82-759c-4236-b96f-ac6bcc92d66f", 00:10:09.627 "is_configured": true, 00:10:09.627 "data_offset": 0, 00:10:09.627 "data_size": 65536 00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "name": "BaseBdev3", 00:10:09.627 "uuid": "96f3d274-0994-475c-b4fd-d74d4b8c753a", 00:10:09.627 "is_configured": true, 00:10:09.627 "data_offset": 0, 00:10:09.627 "data_size": 65536 
00:10:09.627 }, 00:10:09.627 { 00:10:09.627 "name": "BaseBdev4", 00:10:09.627 "uuid": "562f4d9b-4b22-47b4-8194-da1a24b5e0e2", 00:10:09.627 "is_configured": true, 00:10:09.627 "data_offset": 0, 00:10:09.627 "data_size": 65536 00:10:09.627 } 00:10:09.627 ] 00:10:09.627 } 00:10:09.627 } 00:10:09.627 }' 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:09.627 BaseBdev2 00:10:09.627 BaseBdev3 00:10:09.627 BaseBdev4' 00:10:09.627 23:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.627 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.627 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.627 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:09.627 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.627 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.627 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.628 
23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.628 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 [2024-12-06 23:44:21.217905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.888 [2024-12-06 23:44:21.218028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.888 [2024-12-06 23:44:21.218155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.888 [2024-12-06 23:44:21.218257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.888 [2024-12-06 23:44:21.218301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69296 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69296 ']' 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69296 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69296 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69296' 00:10:09.888 killing process with pid 69296 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69296 00:10:09.888 [2024-12-06 23:44:21.268246] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.888 23:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69296 00:10:10.148 [2024-12-06 23:44:21.701925] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.526 23:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:11.526 00:10:11.526 real 0m11.761s 00:10:11.526 user 0m18.364s 00:10:11.526 sys 0m2.124s 00:10:11.526 23:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.526 ************************************ 00:10:11.526 END TEST raid_state_function_test 00:10:11.526 23:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.526 ************************************ 00:10:11.526 23:44:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:11.526 23:44:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.526 23:44:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.526 23:44:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.526 ************************************ 00:10:11.526 START TEST raid_state_function_test_sb 00:10:11.526 ************************************ 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:11.526 
23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:11.526 Process raid pid: 69963 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69963 00:10:11.526 23:44:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69963' 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69963 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69963 ']' 00:10:11.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.526 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.785 [2024-12-06 23:44:23.125579] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:10:11.785 [2024-12-06 23:44:23.125706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.785 [2024-12-06 23:44:23.281893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.044 [2024-12-06 23:44:23.422594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.303 [2024-12-06 23:44:23.666147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.303 [2024-12-06 23:44:23.666308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.562 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.562 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:12.562 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.562 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.562 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.562 [2024-12-06 23:44:23.960468] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.562 [2024-12-06 23:44:23.960543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.562 [2024-12-06 23:44:23.960554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.562 [2024-12-06 23:44:23.960565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.562 [2024-12-06 23:44:23.960571] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:12.562 [2024-12-06 23:44:23.960581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.562 [2024-12-06 23:44:23.960587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.562 [2024-12-06 23:44:23.960596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.562 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.562 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.562 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.563 23:44:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.563 23:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.563 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.563 "name": "Existed_Raid", 00:10:12.563 "uuid": "6f0aaf2d-2294-4173-b04e-8ecdfc890d0a", 00:10:12.563 "strip_size_kb": 64, 00:10:12.563 "state": "configuring", 00:10:12.563 "raid_level": "raid0", 00:10:12.563 "superblock": true, 00:10:12.563 "num_base_bdevs": 4, 00:10:12.563 "num_base_bdevs_discovered": 0, 00:10:12.563 "num_base_bdevs_operational": 4, 00:10:12.563 "base_bdevs_list": [ 00:10:12.563 { 00:10:12.563 "name": "BaseBdev1", 00:10:12.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.563 "is_configured": false, 00:10:12.563 "data_offset": 0, 00:10:12.563 "data_size": 0 00:10:12.563 }, 00:10:12.563 { 00:10:12.563 "name": "BaseBdev2", 00:10:12.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.563 "is_configured": false, 00:10:12.563 "data_offset": 0, 00:10:12.563 "data_size": 0 00:10:12.563 }, 00:10:12.563 { 00:10:12.563 "name": "BaseBdev3", 00:10:12.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.563 "is_configured": false, 00:10:12.563 "data_offset": 0, 00:10:12.563 "data_size": 0 00:10:12.563 }, 00:10:12.563 { 00:10:12.563 "name": "BaseBdev4", 00:10:12.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.563 "is_configured": false, 00:10:12.563 "data_offset": 0, 00:10:12.563 "data_size": 0 00:10:12.563 } 00:10:12.563 ] 00:10:12.563 }' 00:10:12.563 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.563 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.823 23:44:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.823 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.823 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.082 [2024-12-06 23:44:24.387705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.082 [2024-12-06 23:44:24.387843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:13.082 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.082 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.082 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.082 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.082 [2024-12-06 23:44:24.399691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.082 [2024-12-06 23:44:24.399824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.082 [2024-12-06 23:44:24.399858] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.082 [2024-12-06 23:44:24.399883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.082 [2024-12-06 23:44:24.399911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.082 [2024-12-06 23:44:24.399936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.082 [2024-12-06 23:44:24.399961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:13.082 [2024-12-06 23:44:24.400013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.082 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.082 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.082 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.082 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.082 [2024-12-06 23:44:24.455176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.083 BaseBdev1 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.083 [ 00:10:13.083 { 00:10:13.083 "name": "BaseBdev1", 00:10:13.083 "aliases": [ 00:10:13.083 "79595016-5881-4c35-9030-72531d367bca" 00:10:13.083 ], 00:10:13.083 "product_name": "Malloc disk", 00:10:13.083 "block_size": 512, 00:10:13.083 "num_blocks": 65536, 00:10:13.083 "uuid": "79595016-5881-4c35-9030-72531d367bca", 00:10:13.083 "assigned_rate_limits": { 00:10:13.083 "rw_ios_per_sec": 0, 00:10:13.083 "rw_mbytes_per_sec": 0, 00:10:13.083 "r_mbytes_per_sec": 0, 00:10:13.083 "w_mbytes_per_sec": 0 00:10:13.083 }, 00:10:13.083 "claimed": true, 00:10:13.083 "claim_type": "exclusive_write", 00:10:13.083 "zoned": false, 00:10:13.083 "supported_io_types": { 00:10:13.083 "read": true, 00:10:13.083 "write": true, 00:10:13.083 "unmap": true, 00:10:13.083 "flush": true, 00:10:13.083 "reset": true, 00:10:13.083 "nvme_admin": false, 00:10:13.083 "nvme_io": false, 00:10:13.083 "nvme_io_md": false, 00:10:13.083 "write_zeroes": true, 00:10:13.083 "zcopy": true, 00:10:13.083 "get_zone_info": false, 00:10:13.083 "zone_management": false, 00:10:13.083 "zone_append": false, 00:10:13.083 "compare": false, 00:10:13.083 "compare_and_write": false, 00:10:13.083 "abort": true, 00:10:13.083 "seek_hole": false, 00:10:13.083 "seek_data": false, 00:10:13.083 "copy": true, 00:10:13.083 "nvme_iov_md": false 00:10:13.083 }, 00:10:13.083 "memory_domains": [ 00:10:13.083 { 00:10:13.083 "dma_device_id": "system", 00:10:13.083 "dma_device_type": 1 00:10:13.083 }, 00:10:13.083 { 00:10:13.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.083 "dma_device_type": 2 00:10:13.083 } 00:10:13.083 ], 00:10:13.083 "driver_specific": {} 
00:10:13.083 } 00:10:13.083 ] 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.083 "name": "Existed_Raid", 00:10:13.083 "uuid": "405ca57c-05d9-4eba-84cb-77cb0f2ba596", 00:10:13.083 "strip_size_kb": 64, 00:10:13.083 "state": "configuring", 00:10:13.083 "raid_level": "raid0", 00:10:13.083 "superblock": true, 00:10:13.083 "num_base_bdevs": 4, 00:10:13.083 "num_base_bdevs_discovered": 1, 00:10:13.083 "num_base_bdevs_operational": 4, 00:10:13.083 "base_bdevs_list": [ 00:10:13.083 { 00:10:13.083 "name": "BaseBdev1", 00:10:13.083 "uuid": "79595016-5881-4c35-9030-72531d367bca", 00:10:13.083 "is_configured": true, 00:10:13.083 "data_offset": 2048, 00:10:13.083 "data_size": 63488 00:10:13.083 }, 00:10:13.083 { 00:10:13.083 "name": "BaseBdev2", 00:10:13.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.083 "is_configured": false, 00:10:13.083 "data_offset": 0, 00:10:13.083 "data_size": 0 00:10:13.083 }, 00:10:13.083 { 00:10:13.083 "name": "BaseBdev3", 00:10:13.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.083 "is_configured": false, 00:10:13.083 "data_offset": 0, 00:10:13.083 "data_size": 0 00:10:13.083 }, 00:10:13.083 { 00:10:13.083 "name": "BaseBdev4", 00:10:13.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.083 "is_configured": false, 00:10:13.083 "data_offset": 0, 00:10:13.083 "data_size": 0 00:10:13.083 } 00:10:13.083 ] 00:10:13.083 }' 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.083 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.651 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.651 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.652 [2024-12-06 23:44:24.930465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.652 [2024-12-06 23:44:24.930631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.652 [2024-12-06 23:44:24.942475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.652 [2024-12-06 23:44:24.944793] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.652 [2024-12-06 23:44:24.944872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.652 [2024-12-06 23:44:24.944900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.652 [2024-12-06 23:44:24.944925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.652 [2024-12-06 23:44:24.944943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.652 [2024-12-06 23:44:24.944963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:13.652 23:44:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.652 23:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.652 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.652 "name": 
"Existed_Raid", 00:10:13.652 "uuid": "33f257fe-e253-4a51-9d66-83c96e58eaa6", 00:10:13.652 "strip_size_kb": 64, 00:10:13.652 "state": "configuring", 00:10:13.652 "raid_level": "raid0", 00:10:13.652 "superblock": true, 00:10:13.652 "num_base_bdevs": 4, 00:10:13.652 "num_base_bdevs_discovered": 1, 00:10:13.652 "num_base_bdevs_operational": 4, 00:10:13.652 "base_bdevs_list": [ 00:10:13.652 { 00:10:13.652 "name": "BaseBdev1", 00:10:13.652 "uuid": "79595016-5881-4c35-9030-72531d367bca", 00:10:13.652 "is_configured": true, 00:10:13.652 "data_offset": 2048, 00:10:13.652 "data_size": 63488 00:10:13.652 }, 00:10:13.652 { 00:10:13.652 "name": "BaseBdev2", 00:10:13.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.652 "is_configured": false, 00:10:13.652 "data_offset": 0, 00:10:13.652 "data_size": 0 00:10:13.652 }, 00:10:13.652 { 00:10:13.652 "name": "BaseBdev3", 00:10:13.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.652 "is_configured": false, 00:10:13.652 "data_offset": 0, 00:10:13.652 "data_size": 0 00:10:13.652 }, 00:10:13.652 { 00:10:13.652 "name": "BaseBdev4", 00:10:13.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.652 "is_configured": false, 00:10:13.652 "data_offset": 0, 00:10:13.652 "data_size": 0 00:10:13.652 } 00:10:13.652 ] 00:10:13.652 }' 00:10:13.652 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.652 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.912 [2024-12-06 23:44:25.427446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:13.912 BaseBdev2 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.912 [ 00:10:13.912 { 00:10:13.912 "name": "BaseBdev2", 00:10:13.912 "aliases": [ 00:10:13.912 "2da2257f-a04c-42c1-93f0-949dbf316027" 00:10:13.912 ], 00:10:13.912 "product_name": "Malloc disk", 00:10:13.912 "block_size": 512, 00:10:13.912 "num_blocks": 65536, 00:10:13.912 "uuid": "2da2257f-a04c-42c1-93f0-949dbf316027", 00:10:13.912 
"assigned_rate_limits": { 00:10:13.912 "rw_ios_per_sec": 0, 00:10:13.912 "rw_mbytes_per_sec": 0, 00:10:13.912 "r_mbytes_per_sec": 0, 00:10:13.912 "w_mbytes_per_sec": 0 00:10:13.912 }, 00:10:13.912 "claimed": true, 00:10:13.912 "claim_type": "exclusive_write", 00:10:13.912 "zoned": false, 00:10:13.912 "supported_io_types": { 00:10:13.912 "read": true, 00:10:13.912 "write": true, 00:10:13.912 "unmap": true, 00:10:13.912 "flush": true, 00:10:13.912 "reset": true, 00:10:13.912 "nvme_admin": false, 00:10:13.912 "nvme_io": false, 00:10:13.912 "nvme_io_md": false, 00:10:13.912 "write_zeroes": true, 00:10:13.912 "zcopy": true, 00:10:13.912 "get_zone_info": false, 00:10:13.912 "zone_management": false, 00:10:13.912 "zone_append": false, 00:10:13.912 "compare": false, 00:10:13.912 "compare_and_write": false, 00:10:13.912 "abort": true, 00:10:13.912 "seek_hole": false, 00:10:13.912 "seek_data": false, 00:10:13.912 "copy": true, 00:10:13.912 "nvme_iov_md": false 00:10:13.912 }, 00:10:13.912 "memory_domains": [ 00:10:13.912 { 00:10:13.912 "dma_device_id": "system", 00:10:13.912 "dma_device_type": 1 00:10:13.912 }, 00:10:13.912 { 00:10:13.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.912 "dma_device_type": 2 00:10:13.912 } 00:10:13.912 ], 00:10:13.912 "driver_specific": {} 00:10:13.912 } 00:10:13.912 ] 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.912 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.170 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.170 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.170 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.170 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.170 "name": "Existed_Raid", 00:10:14.170 "uuid": "33f257fe-e253-4a51-9d66-83c96e58eaa6", 00:10:14.170 "strip_size_kb": 64, 00:10:14.170 "state": "configuring", 00:10:14.170 "raid_level": "raid0", 00:10:14.170 "superblock": true, 00:10:14.170 "num_base_bdevs": 4, 00:10:14.170 "num_base_bdevs_discovered": 2, 00:10:14.170 "num_base_bdevs_operational": 4, 
00:10:14.170 "base_bdevs_list": [ 00:10:14.170 { 00:10:14.170 "name": "BaseBdev1", 00:10:14.170 "uuid": "79595016-5881-4c35-9030-72531d367bca", 00:10:14.170 "is_configured": true, 00:10:14.170 "data_offset": 2048, 00:10:14.170 "data_size": 63488 00:10:14.170 }, 00:10:14.170 { 00:10:14.170 "name": "BaseBdev2", 00:10:14.170 "uuid": "2da2257f-a04c-42c1-93f0-949dbf316027", 00:10:14.170 "is_configured": true, 00:10:14.170 "data_offset": 2048, 00:10:14.170 "data_size": 63488 00:10:14.170 }, 00:10:14.170 { 00:10:14.170 "name": "BaseBdev3", 00:10:14.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.170 "is_configured": false, 00:10:14.170 "data_offset": 0, 00:10:14.170 "data_size": 0 00:10:14.170 }, 00:10:14.170 { 00:10:14.170 "name": "BaseBdev4", 00:10:14.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.170 "is_configured": false, 00:10:14.170 "data_offset": 0, 00:10:14.170 "data_size": 0 00:10:14.170 } 00:10:14.170 ] 00:10:14.170 }' 00:10:14.170 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.170 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.429 [2024-12-06 23:44:25.907215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.429 BaseBdev3 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.429 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.429 [ 00:10:14.429 { 00:10:14.429 "name": "BaseBdev3", 00:10:14.429 "aliases": [ 00:10:14.429 "c3f5af64-9f7b-49bb-84a9-a9d024c602b1" 00:10:14.429 ], 00:10:14.429 "product_name": "Malloc disk", 00:10:14.429 "block_size": 512, 00:10:14.429 "num_blocks": 65536, 00:10:14.429 "uuid": "c3f5af64-9f7b-49bb-84a9-a9d024c602b1", 00:10:14.429 "assigned_rate_limits": { 00:10:14.429 "rw_ios_per_sec": 0, 00:10:14.429 "rw_mbytes_per_sec": 0, 00:10:14.429 "r_mbytes_per_sec": 0, 00:10:14.429 "w_mbytes_per_sec": 0 00:10:14.429 }, 00:10:14.429 "claimed": true, 00:10:14.429 "claim_type": "exclusive_write", 00:10:14.429 "zoned": false, 00:10:14.429 "supported_io_types": { 00:10:14.429 "read": true, 00:10:14.429 
"write": true, 00:10:14.429 "unmap": true, 00:10:14.429 "flush": true, 00:10:14.429 "reset": true, 00:10:14.429 "nvme_admin": false, 00:10:14.429 "nvme_io": false, 00:10:14.429 "nvme_io_md": false, 00:10:14.429 "write_zeroes": true, 00:10:14.429 "zcopy": true, 00:10:14.429 "get_zone_info": false, 00:10:14.429 "zone_management": false, 00:10:14.429 "zone_append": false, 00:10:14.429 "compare": false, 00:10:14.429 "compare_and_write": false, 00:10:14.429 "abort": true, 00:10:14.429 "seek_hole": false, 00:10:14.429 "seek_data": false, 00:10:14.429 "copy": true, 00:10:14.429 "nvme_iov_md": false 00:10:14.429 }, 00:10:14.429 "memory_domains": [ 00:10:14.429 { 00:10:14.429 "dma_device_id": "system", 00:10:14.429 "dma_device_type": 1 00:10:14.429 }, 00:10:14.429 { 00:10:14.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.429 "dma_device_type": 2 00:10:14.429 } 00:10:14.429 ], 00:10:14.429 "driver_specific": {} 00:10:14.429 } 00:10:14.429 ] 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.430 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.689 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.689 "name": "Existed_Raid", 00:10:14.689 "uuid": "33f257fe-e253-4a51-9d66-83c96e58eaa6", 00:10:14.689 "strip_size_kb": 64, 00:10:14.689 "state": "configuring", 00:10:14.689 "raid_level": "raid0", 00:10:14.689 "superblock": true, 00:10:14.689 "num_base_bdevs": 4, 00:10:14.689 "num_base_bdevs_discovered": 3, 00:10:14.689 "num_base_bdevs_operational": 4, 00:10:14.689 "base_bdevs_list": [ 00:10:14.689 { 00:10:14.689 "name": "BaseBdev1", 00:10:14.689 "uuid": "79595016-5881-4c35-9030-72531d367bca", 00:10:14.689 "is_configured": true, 00:10:14.689 "data_offset": 2048, 00:10:14.689 "data_size": 63488 00:10:14.689 }, 00:10:14.689 { 00:10:14.689 "name": "BaseBdev2", 00:10:14.689 "uuid": 
"2da2257f-a04c-42c1-93f0-949dbf316027", 00:10:14.689 "is_configured": true, 00:10:14.689 "data_offset": 2048, 00:10:14.689 "data_size": 63488 00:10:14.689 }, 00:10:14.689 { 00:10:14.689 "name": "BaseBdev3", 00:10:14.689 "uuid": "c3f5af64-9f7b-49bb-84a9-a9d024c602b1", 00:10:14.689 "is_configured": true, 00:10:14.689 "data_offset": 2048, 00:10:14.689 "data_size": 63488 00:10:14.689 }, 00:10:14.689 { 00:10:14.689 "name": "BaseBdev4", 00:10:14.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.689 "is_configured": false, 00:10:14.689 "data_offset": 0, 00:10:14.689 "data_size": 0 00:10:14.689 } 00:10:14.689 ] 00:10:14.689 }' 00:10:14.689 23:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.689 23:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.949 [2024-12-06 23:44:26.429855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:14.949 [2024-12-06 23:44:26.430236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:14.949 [2024-12-06 23:44:26.430287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:14.949 [2024-12-06 23:44:26.430603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:14.949 BaseBdev4 00:10:14.949 [2024-12-06 23:44:26.430804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:14.949 [2024-12-06 23:44:26.430818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:14.949 [2024-12-06 23:44:26.431003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.949 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 [ 00:10:14.950 { 00:10:14.950 "name": "BaseBdev4", 00:10:14.950 "aliases": [ 00:10:14.950 "3b010b5c-9618-4cca-8e9a-95e8e10528b6" 00:10:14.950 ], 00:10:14.950 "product_name": "Malloc disk", 00:10:14.950 "block_size": 512, 00:10:14.950 
"num_blocks": 65536, 00:10:14.950 "uuid": "3b010b5c-9618-4cca-8e9a-95e8e10528b6", 00:10:14.950 "assigned_rate_limits": { 00:10:14.950 "rw_ios_per_sec": 0, 00:10:14.950 "rw_mbytes_per_sec": 0, 00:10:14.950 "r_mbytes_per_sec": 0, 00:10:14.950 "w_mbytes_per_sec": 0 00:10:14.950 }, 00:10:14.950 "claimed": true, 00:10:14.950 "claim_type": "exclusive_write", 00:10:14.950 "zoned": false, 00:10:14.950 "supported_io_types": { 00:10:14.950 "read": true, 00:10:14.950 "write": true, 00:10:14.950 "unmap": true, 00:10:14.950 "flush": true, 00:10:14.950 "reset": true, 00:10:14.950 "nvme_admin": false, 00:10:14.950 "nvme_io": false, 00:10:14.950 "nvme_io_md": false, 00:10:14.950 "write_zeroes": true, 00:10:14.950 "zcopy": true, 00:10:14.950 "get_zone_info": false, 00:10:14.950 "zone_management": false, 00:10:14.950 "zone_append": false, 00:10:14.950 "compare": false, 00:10:14.950 "compare_and_write": false, 00:10:14.950 "abort": true, 00:10:14.950 "seek_hole": false, 00:10:14.950 "seek_data": false, 00:10:14.950 "copy": true, 00:10:14.950 "nvme_iov_md": false 00:10:14.950 }, 00:10:14.950 "memory_domains": [ 00:10:14.950 { 00:10:14.950 "dma_device_id": "system", 00:10:14.950 "dma_device_type": 1 00:10:14.950 }, 00:10:14.950 { 00:10:14.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.950 "dma_device_type": 2 00:10:14.950 } 00:10:14.950 ], 00:10:14.950 "driver_specific": {} 00:10:14.950 } 00:10:14.950 ] 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.950 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.209 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.209 "name": "Existed_Raid", 00:10:15.209 "uuid": "33f257fe-e253-4a51-9d66-83c96e58eaa6", 00:10:15.209 "strip_size_kb": 64, 00:10:15.209 "state": "online", 00:10:15.209 "raid_level": "raid0", 00:10:15.209 "superblock": true, 00:10:15.209 "num_base_bdevs": 4, 
00:10:15.209 "num_base_bdevs_discovered": 4, 00:10:15.209 "num_base_bdevs_operational": 4, 00:10:15.209 "base_bdevs_list": [ 00:10:15.209 { 00:10:15.209 "name": "BaseBdev1", 00:10:15.209 "uuid": "79595016-5881-4c35-9030-72531d367bca", 00:10:15.209 "is_configured": true, 00:10:15.209 "data_offset": 2048, 00:10:15.209 "data_size": 63488 00:10:15.209 }, 00:10:15.209 { 00:10:15.209 "name": "BaseBdev2", 00:10:15.209 "uuid": "2da2257f-a04c-42c1-93f0-949dbf316027", 00:10:15.209 "is_configured": true, 00:10:15.209 "data_offset": 2048, 00:10:15.209 "data_size": 63488 00:10:15.209 }, 00:10:15.209 { 00:10:15.210 "name": "BaseBdev3", 00:10:15.210 "uuid": "c3f5af64-9f7b-49bb-84a9-a9d024c602b1", 00:10:15.210 "is_configured": true, 00:10:15.210 "data_offset": 2048, 00:10:15.210 "data_size": 63488 00:10:15.210 }, 00:10:15.210 { 00:10:15.210 "name": "BaseBdev4", 00:10:15.210 "uuid": "3b010b5c-9618-4cca-8e9a-95e8e10528b6", 00:10:15.210 "is_configured": true, 00:10:15.210 "data_offset": 2048, 00:10:15.210 "data_size": 63488 00:10:15.210 } 00:10:15.210 ] 00:10:15.210 }' 00:10:15.210 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.210 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.470 
23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.470 [2024-12-06 23:44:26.921490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.470 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.470 "name": "Existed_Raid", 00:10:15.470 "aliases": [ 00:10:15.470 "33f257fe-e253-4a51-9d66-83c96e58eaa6" 00:10:15.470 ], 00:10:15.470 "product_name": "Raid Volume", 00:10:15.470 "block_size": 512, 00:10:15.470 "num_blocks": 253952, 00:10:15.470 "uuid": "33f257fe-e253-4a51-9d66-83c96e58eaa6", 00:10:15.470 "assigned_rate_limits": { 00:10:15.470 "rw_ios_per_sec": 0, 00:10:15.470 "rw_mbytes_per_sec": 0, 00:10:15.470 "r_mbytes_per_sec": 0, 00:10:15.470 "w_mbytes_per_sec": 0 00:10:15.470 }, 00:10:15.470 "claimed": false, 00:10:15.470 "zoned": false, 00:10:15.470 "supported_io_types": { 00:10:15.470 "read": true, 00:10:15.470 "write": true, 00:10:15.470 "unmap": true, 00:10:15.470 "flush": true, 00:10:15.470 "reset": true, 00:10:15.470 "nvme_admin": false, 00:10:15.470 "nvme_io": false, 00:10:15.470 "nvme_io_md": false, 00:10:15.470 "write_zeroes": true, 00:10:15.470 "zcopy": false, 00:10:15.470 "get_zone_info": false, 00:10:15.470 "zone_management": false, 00:10:15.470 "zone_append": false, 00:10:15.470 "compare": false, 00:10:15.470 "compare_and_write": false, 00:10:15.470 "abort": false, 00:10:15.470 "seek_hole": false, 00:10:15.470 "seek_data": false, 00:10:15.470 "copy": false, 00:10:15.470 
"nvme_iov_md": false 00:10:15.470 }, 00:10:15.470 "memory_domains": [ 00:10:15.470 { 00:10:15.470 "dma_device_id": "system", 00:10:15.470 "dma_device_type": 1 00:10:15.470 }, 00:10:15.470 { 00:10:15.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.470 "dma_device_type": 2 00:10:15.470 }, 00:10:15.470 { 00:10:15.470 "dma_device_id": "system", 00:10:15.470 "dma_device_type": 1 00:10:15.470 }, 00:10:15.470 { 00:10:15.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.470 "dma_device_type": 2 00:10:15.470 }, 00:10:15.470 { 00:10:15.470 "dma_device_id": "system", 00:10:15.470 "dma_device_type": 1 00:10:15.470 }, 00:10:15.470 { 00:10:15.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.470 "dma_device_type": 2 00:10:15.470 }, 00:10:15.470 { 00:10:15.470 "dma_device_id": "system", 00:10:15.470 "dma_device_type": 1 00:10:15.470 }, 00:10:15.470 { 00:10:15.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.470 "dma_device_type": 2 00:10:15.470 } 00:10:15.470 ], 00:10:15.470 "driver_specific": { 00:10:15.470 "raid": { 00:10:15.470 "uuid": "33f257fe-e253-4a51-9d66-83c96e58eaa6", 00:10:15.470 "strip_size_kb": 64, 00:10:15.470 "state": "online", 00:10:15.470 "raid_level": "raid0", 00:10:15.470 "superblock": true, 00:10:15.470 "num_base_bdevs": 4, 00:10:15.470 "num_base_bdevs_discovered": 4, 00:10:15.471 "num_base_bdevs_operational": 4, 00:10:15.471 "base_bdevs_list": [ 00:10:15.471 { 00:10:15.471 "name": "BaseBdev1", 00:10:15.471 "uuid": "79595016-5881-4c35-9030-72531d367bca", 00:10:15.471 "is_configured": true, 00:10:15.471 "data_offset": 2048, 00:10:15.471 "data_size": 63488 00:10:15.471 }, 00:10:15.471 { 00:10:15.471 "name": "BaseBdev2", 00:10:15.471 "uuid": "2da2257f-a04c-42c1-93f0-949dbf316027", 00:10:15.471 "is_configured": true, 00:10:15.471 "data_offset": 2048, 00:10:15.471 "data_size": 63488 00:10:15.471 }, 00:10:15.471 { 00:10:15.471 "name": "BaseBdev3", 00:10:15.471 "uuid": "c3f5af64-9f7b-49bb-84a9-a9d024c602b1", 00:10:15.471 "is_configured": true, 
00:10:15.471 "data_offset": 2048, 00:10:15.471 "data_size": 63488 00:10:15.471 }, 00:10:15.471 { 00:10:15.471 "name": "BaseBdev4", 00:10:15.471 "uuid": "3b010b5c-9618-4cca-8e9a-95e8e10528b6", 00:10:15.471 "is_configured": true, 00:10:15.471 "data_offset": 2048, 00:10:15.471 "data_size": 63488 00:10:15.471 } 00:10:15.471 ] 00:10:15.471 } 00:10:15.471 } 00:10:15.471 }' 00:10:15.471 23:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.471 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.471 BaseBdev2 00:10:15.471 BaseBdev3 00:10:15.471 BaseBdev4' 00:10:15.471 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.734 23:44:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.734 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.734 [2024-12-06 23:44:27.256556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.734 [2024-12-06 23:44:27.256690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.734 [2024-12-06 23:44:27.256776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.002 "name": "Existed_Raid", 00:10:16.002 "uuid": "33f257fe-e253-4a51-9d66-83c96e58eaa6", 00:10:16.002 "strip_size_kb": 64, 00:10:16.002 "state": "offline", 00:10:16.002 "raid_level": "raid0", 00:10:16.002 "superblock": true, 00:10:16.002 "num_base_bdevs": 4, 00:10:16.002 "num_base_bdevs_discovered": 3, 00:10:16.002 "num_base_bdevs_operational": 3, 00:10:16.002 "base_bdevs_list": [ 00:10:16.002 { 00:10:16.002 "name": null, 00:10:16.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.002 "is_configured": false, 00:10:16.002 "data_offset": 0, 00:10:16.002 "data_size": 63488 00:10:16.002 }, 00:10:16.002 { 00:10:16.002 "name": "BaseBdev2", 00:10:16.002 "uuid": "2da2257f-a04c-42c1-93f0-949dbf316027", 00:10:16.002 "is_configured": true, 00:10:16.002 "data_offset": 2048, 00:10:16.002 "data_size": 63488 00:10:16.002 }, 00:10:16.002 { 00:10:16.002 "name": "BaseBdev3", 00:10:16.002 "uuid": "c3f5af64-9f7b-49bb-84a9-a9d024c602b1", 00:10:16.002 "is_configured": true, 00:10:16.002 "data_offset": 2048, 00:10:16.002 "data_size": 63488 00:10:16.002 }, 00:10:16.002 { 00:10:16.002 "name": "BaseBdev4", 00:10:16.002 "uuid": "3b010b5c-9618-4cca-8e9a-95e8e10528b6", 00:10:16.002 "is_configured": true, 00:10:16.002 "data_offset": 2048, 00:10:16.002 "data_size": 63488 00:10:16.002 } 00:10:16.002 ] 00:10:16.002 }' 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.002 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.262 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.262 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.262 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.262 23:44:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.262 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.262 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.523 [2024-12-06 23:44:27.862109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.523 23:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:16.523 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.523 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.523 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:16.523 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.523 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.523 [2024-12-06 23:44:28.022624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:16.784 23:44:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.784 [2024-12-06 23:44:28.188082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:16.784 [2024-12-06 23:44:28.188241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.784 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.044 BaseBdev2 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.044 [ 00:10:17.044 { 00:10:17.044 "name": "BaseBdev2", 00:10:17.044 "aliases": [ 00:10:17.044 
"5dfe13b9-7193-44a4-bcb4-b866e28e37be" 00:10:17.044 ], 00:10:17.044 "product_name": "Malloc disk", 00:10:17.044 "block_size": 512, 00:10:17.044 "num_blocks": 65536, 00:10:17.044 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:17.044 "assigned_rate_limits": { 00:10:17.044 "rw_ios_per_sec": 0, 00:10:17.044 "rw_mbytes_per_sec": 0, 00:10:17.044 "r_mbytes_per_sec": 0, 00:10:17.044 "w_mbytes_per_sec": 0 00:10:17.044 }, 00:10:17.044 "claimed": false, 00:10:17.044 "zoned": false, 00:10:17.044 "supported_io_types": { 00:10:17.044 "read": true, 00:10:17.044 "write": true, 00:10:17.044 "unmap": true, 00:10:17.044 "flush": true, 00:10:17.044 "reset": true, 00:10:17.044 "nvme_admin": false, 00:10:17.044 "nvme_io": false, 00:10:17.044 "nvme_io_md": false, 00:10:17.044 "write_zeroes": true, 00:10:17.044 "zcopy": true, 00:10:17.044 "get_zone_info": false, 00:10:17.044 "zone_management": false, 00:10:17.044 "zone_append": false, 00:10:17.044 "compare": false, 00:10:17.044 "compare_and_write": false, 00:10:17.044 "abort": true, 00:10:17.044 "seek_hole": false, 00:10:17.044 "seek_data": false, 00:10:17.044 "copy": true, 00:10:17.044 "nvme_iov_md": false 00:10:17.044 }, 00:10:17.044 "memory_domains": [ 00:10:17.044 { 00:10:17.044 "dma_device_id": "system", 00:10:17.044 "dma_device_type": 1 00:10:17.044 }, 00:10:17.044 { 00:10:17.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.044 "dma_device_type": 2 00:10:17.044 } 00:10:17.044 ], 00:10:17.044 "driver_specific": {} 00:10:17.044 } 00:10:17.044 ] 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.044 23:44:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.044 BaseBdev3 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.044 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.045 [ 00:10:17.045 { 
00:10:17.045 "name": "BaseBdev3", 00:10:17.045 "aliases": [ 00:10:17.045 "3685f884-f087-41ac-a209-da23692dac67" 00:10:17.045 ], 00:10:17.045 "product_name": "Malloc disk", 00:10:17.045 "block_size": 512, 00:10:17.045 "num_blocks": 65536, 00:10:17.045 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:17.045 "assigned_rate_limits": { 00:10:17.045 "rw_ios_per_sec": 0, 00:10:17.045 "rw_mbytes_per_sec": 0, 00:10:17.045 "r_mbytes_per_sec": 0, 00:10:17.045 "w_mbytes_per_sec": 0 00:10:17.045 }, 00:10:17.045 "claimed": false, 00:10:17.045 "zoned": false, 00:10:17.045 "supported_io_types": { 00:10:17.045 "read": true, 00:10:17.045 "write": true, 00:10:17.045 "unmap": true, 00:10:17.045 "flush": true, 00:10:17.045 "reset": true, 00:10:17.045 "nvme_admin": false, 00:10:17.045 "nvme_io": false, 00:10:17.045 "nvme_io_md": false, 00:10:17.045 "write_zeroes": true, 00:10:17.045 "zcopy": true, 00:10:17.045 "get_zone_info": false, 00:10:17.045 "zone_management": false, 00:10:17.045 "zone_append": false, 00:10:17.045 "compare": false, 00:10:17.045 "compare_and_write": false, 00:10:17.045 "abort": true, 00:10:17.045 "seek_hole": false, 00:10:17.045 "seek_data": false, 00:10:17.045 "copy": true, 00:10:17.045 "nvme_iov_md": false 00:10:17.045 }, 00:10:17.045 "memory_domains": [ 00:10:17.045 { 00:10:17.045 "dma_device_id": "system", 00:10:17.045 "dma_device_type": 1 00:10:17.045 }, 00:10:17.045 { 00:10:17.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.045 "dma_device_type": 2 00:10:17.045 } 00:10:17.045 ], 00:10:17.045 "driver_specific": {} 00:10:17.045 } 00:10:17.045 ] 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.045 BaseBdev4 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.045 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:17.045 [ 00:10:17.304 { 00:10:17.304 "name": "BaseBdev4", 00:10:17.304 "aliases": [ 00:10:17.304 "f66ae90b-f193-4174-a5ad-0b0f428c89c6" 00:10:17.304 ], 00:10:17.304 "product_name": "Malloc disk", 00:10:17.304 "block_size": 512, 00:10:17.304 "num_blocks": 65536, 00:10:17.304 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:17.304 "assigned_rate_limits": { 00:10:17.304 "rw_ios_per_sec": 0, 00:10:17.304 "rw_mbytes_per_sec": 0, 00:10:17.304 "r_mbytes_per_sec": 0, 00:10:17.304 "w_mbytes_per_sec": 0 00:10:17.304 }, 00:10:17.304 "claimed": false, 00:10:17.304 "zoned": false, 00:10:17.304 "supported_io_types": { 00:10:17.305 "read": true, 00:10:17.305 "write": true, 00:10:17.305 "unmap": true, 00:10:17.305 "flush": true, 00:10:17.305 "reset": true, 00:10:17.305 "nvme_admin": false, 00:10:17.305 "nvme_io": false, 00:10:17.305 "nvme_io_md": false, 00:10:17.305 "write_zeroes": true, 00:10:17.305 "zcopy": true, 00:10:17.305 "get_zone_info": false, 00:10:17.305 "zone_management": false, 00:10:17.305 "zone_append": false, 00:10:17.305 "compare": false, 00:10:17.305 "compare_and_write": false, 00:10:17.305 "abort": true, 00:10:17.305 "seek_hole": false, 00:10:17.305 "seek_data": false, 00:10:17.305 "copy": true, 00:10:17.305 "nvme_iov_md": false 00:10:17.305 }, 00:10:17.305 "memory_domains": [ 00:10:17.305 { 00:10:17.305 "dma_device_id": "system", 00:10:17.305 "dma_device_type": 1 00:10:17.305 }, 00:10:17.305 { 00:10:17.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.305 "dma_device_type": 2 00:10:17.305 } 00:10:17.305 ], 00:10:17.305 "driver_specific": {} 00:10:17.305 } 00:10:17.305 ] 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.305 23:44:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.305 [2024-12-06 23:44:28.619720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.305 [2024-12-06 23:44:28.619846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.305 [2024-12-06 23:44:28.619894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.305 [2024-12-06 23:44:28.622024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.305 [2024-12-06 23:44:28.622118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.305 "name": "Existed_Raid", 00:10:17.305 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:17.305 "strip_size_kb": 64, 00:10:17.305 "state": "configuring", 00:10:17.305 "raid_level": "raid0", 00:10:17.305 "superblock": true, 00:10:17.305 "num_base_bdevs": 4, 00:10:17.305 "num_base_bdevs_discovered": 3, 00:10:17.305 "num_base_bdevs_operational": 4, 00:10:17.305 "base_bdevs_list": [ 00:10:17.305 { 00:10:17.305 "name": "BaseBdev1", 00:10:17.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.305 "is_configured": false, 00:10:17.305 "data_offset": 0, 00:10:17.305 "data_size": 0 00:10:17.305 }, 00:10:17.305 { 00:10:17.305 "name": "BaseBdev2", 00:10:17.305 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:17.305 "is_configured": true, 00:10:17.305 "data_offset": 2048, 00:10:17.305 "data_size": 63488 
00:10:17.305 }, 00:10:17.305 { 00:10:17.305 "name": "BaseBdev3", 00:10:17.305 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:17.305 "is_configured": true, 00:10:17.305 "data_offset": 2048, 00:10:17.305 "data_size": 63488 00:10:17.305 }, 00:10:17.305 { 00:10:17.305 "name": "BaseBdev4", 00:10:17.305 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:17.305 "is_configured": true, 00:10:17.305 "data_offset": 2048, 00:10:17.305 "data_size": 63488 00:10:17.305 } 00:10:17.305 ] 00:10:17.305 }' 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.305 23:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.566 [2024-12-06 23:44:29.063038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.566 "name": "Existed_Raid", 00:10:17.566 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:17.566 "strip_size_kb": 64, 00:10:17.566 "state": "configuring", 00:10:17.566 "raid_level": "raid0", 00:10:17.566 "superblock": true, 00:10:17.566 "num_base_bdevs": 4, 00:10:17.566 "num_base_bdevs_discovered": 2, 00:10:17.566 "num_base_bdevs_operational": 4, 00:10:17.566 "base_bdevs_list": [ 00:10:17.566 { 00:10:17.566 "name": "BaseBdev1", 00:10:17.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.566 "is_configured": false, 00:10:17.566 "data_offset": 0, 00:10:17.566 "data_size": 0 00:10:17.566 }, 00:10:17.566 { 00:10:17.566 "name": null, 00:10:17.566 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:17.566 "is_configured": false, 00:10:17.566 "data_offset": 0, 00:10:17.566 "data_size": 63488 
00:10:17.566 }, 00:10:17.566 { 00:10:17.566 "name": "BaseBdev3", 00:10:17.566 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:17.566 "is_configured": true, 00:10:17.566 "data_offset": 2048, 00:10:17.566 "data_size": 63488 00:10:17.566 }, 00:10:17.566 { 00:10:17.566 "name": "BaseBdev4", 00:10:17.566 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:17.566 "is_configured": true, 00:10:17.566 "data_offset": 2048, 00:10:17.566 "data_size": 63488 00:10:17.566 } 00:10:17.566 ] 00:10:17.566 }' 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.566 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.136 [2024-12-06 23:44:29.593963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.136 BaseBdev1 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.136 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.137 [ 00:10:18.137 { 00:10:18.137 "name": "BaseBdev1", 00:10:18.137 "aliases": [ 00:10:18.137 "1ae7e1d0-c8ab-4f28-b814-5533facf1d83" 00:10:18.137 ], 00:10:18.137 "product_name": "Malloc disk", 00:10:18.137 "block_size": 512, 00:10:18.137 "num_blocks": 65536, 00:10:18.137 "uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:18.137 "assigned_rate_limits": { 00:10:18.137 "rw_ios_per_sec": 0, 00:10:18.137 "rw_mbytes_per_sec": 0, 
00:10:18.137 "r_mbytes_per_sec": 0, 00:10:18.137 "w_mbytes_per_sec": 0 00:10:18.137 }, 00:10:18.137 "claimed": true, 00:10:18.137 "claim_type": "exclusive_write", 00:10:18.137 "zoned": false, 00:10:18.137 "supported_io_types": { 00:10:18.137 "read": true, 00:10:18.137 "write": true, 00:10:18.137 "unmap": true, 00:10:18.137 "flush": true, 00:10:18.137 "reset": true, 00:10:18.137 "nvme_admin": false, 00:10:18.137 "nvme_io": false, 00:10:18.137 "nvme_io_md": false, 00:10:18.137 "write_zeroes": true, 00:10:18.137 "zcopy": true, 00:10:18.137 "get_zone_info": false, 00:10:18.137 "zone_management": false, 00:10:18.137 "zone_append": false, 00:10:18.137 "compare": false, 00:10:18.137 "compare_and_write": false, 00:10:18.137 "abort": true, 00:10:18.137 "seek_hole": false, 00:10:18.137 "seek_data": false, 00:10:18.137 "copy": true, 00:10:18.137 "nvme_iov_md": false 00:10:18.137 }, 00:10:18.137 "memory_domains": [ 00:10:18.137 { 00:10:18.137 "dma_device_id": "system", 00:10:18.137 "dma_device_type": 1 00:10:18.137 }, 00:10:18.137 { 00:10:18.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.137 "dma_device_type": 2 00:10:18.137 } 00:10:18.137 ], 00:10:18.137 "driver_specific": {} 00:10:18.137 } 00:10:18.137 ] 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.137 23:44:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.137 "name": "Existed_Raid", 00:10:18.137 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:18.137 "strip_size_kb": 64, 00:10:18.137 "state": "configuring", 00:10:18.137 "raid_level": "raid0", 00:10:18.137 "superblock": true, 00:10:18.137 "num_base_bdevs": 4, 00:10:18.137 "num_base_bdevs_discovered": 3, 00:10:18.137 "num_base_bdevs_operational": 4, 00:10:18.137 "base_bdevs_list": [ 00:10:18.137 { 00:10:18.137 "name": "BaseBdev1", 00:10:18.137 "uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:18.137 "is_configured": true, 00:10:18.137 "data_offset": 2048, 00:10:18.137 "data_size": 63488 00:10:18.137 }, 00:10:18.137 { 
00:10:18.137 "name": null, 00:10:18.137 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:18.137 "is_configured": false, 00:10:18.137 "data_offset": 0, 00:10:18.137 "data_size": 63488 00:10:18.137 }, 00:10:18.137 { 00:10:18.137 "name": "BaseBdev3", 00:10:18.137 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:18.137 "is_configured": true, 00:10:18.137 "data_offset": 2048, 00:10:18.137 "data_size": 63488 00:10:18.137 }, 00:10:18.137 { 00:10:18.137 "name": "BaseBdev4", 00:10:18.137 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:18.137 "is_configured": true, 00:10:18.137 "data_offset": 2048, 00:10:18.137 "data_size": 63488 00:10:18.137 } 00:10:18.137 ] 00:10:18.137 }' 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.137 23:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.705 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.705 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.705 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.705 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.705 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.706 [2024-12-06 23:44:30.125147] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.706 23:44:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.706 "name": "Existed_Raid", 00:10:18.706 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:18.706 "strip_size_kb": 64, 00:10:18.706 "state": "configuring", 00:10:18.706 "raid_level": "raid0", 00:10:18.706 "superblock": true, 00:10:18.706 "num_base_bdevs": 4, 00:10:18.706 "num_base_bdevs_discovered": 2, 00:10:18.706 "num_base_bdevs_operational": 4, 00:10:18.706 "base_bdevs_list": [ 00:10:18.706 { 00:10:18.706 "name": "BaseBdev1", 00:10:18.706 "uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:18.706 "is_configured": true, 00:10:18.706 "data_offset": 2048, 00:10:18.706 "data_size": 63488 00:10:18.706 }, 00:10:18.706 { 00:10:18.706 "name": null, 00:10:18.706 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:18.706 "is_configured": false, 00:10:18.706 "data_offset": 0, 00:10:18.706 "data_size": 63488 00:10:18.706 }, 00:10:18.706 { 00:10:18.706 "name": null, 00:10:18.706 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:18.706 "is_configured": false, 00:10:18.706 "data_offset": 0, 00:10:18.706 "data_size": 63488 00:10:18.706 }, 00:10:18.706 { 00:10:18.706 "name": "BaseBdev4", 00:10:18.706 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:18.706 "is_configured": true, 00:10:18.706 "data_offset": 2048, 00:10:18.706 "data_size": 63488 00:10:18.706 } 00:10:18.706 ] 00:10:18.706 }' 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.706 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.965 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.965 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.965 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.965 23:44:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.226 [2024-12-06 23:44:30.572410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.226 "name": "Existed_Raid", 00:10:19.226 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:19.226 "strip_size_kb": 64, 00:10:19.226 "state": "configuring", 00:10:19.226 "raid_level": "raid0", 00:10:19.226 "superblock": true, 00:10:19.226 "num_base_bdevs": 4, 00:10:19.226 "num_base_bdevs_discovered": 3, 00:10:19.226 "num_base_bdevs_operational": 4, 00:10:19.226 "base_bdevs_list": [ 00:10:19.226 { 00:10:19.226 "name": "BaseBdev1", 00:10:19.226 "uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:19.226 "is_configured": true, 00:10:19.226 "data_offset": 2048, 00:10:19.226 "data_size": 63488 00:10:19.226 }, 00:10:19.226 { 00:10:19.226 "name": null, 00:10:19.226 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:19.226 "is_configured": false, 00:10:19.226 "data_offset": 0, 00:10:19.226 "data_size": 63488 00:10:19.226 }, 00:10:19.226 { 00:10:19.226 "name": "BaseBdev3", 00:10:19.226 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:19.226 "is_configured": true, 00:10:19.226 "data_offset": 2048, 00:10:19.226 "data_size": 63488 00:10:19.226 }, 00:10:19.226 { 00:10:19.226 "name": "BaseBdev4", 00:10:19.226 "uuid": 
"f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:19.226 "is_configured": true, 00:10:19.226 "data_offset": 2048, 00:10:19.226 "data_size": 63488 00:10:19.226 } 00:10:19.226 ] 00:10:19.226 }' 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.226 23:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.486 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.486 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.486 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.486 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.486 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.745 [2024-12-06 23:44:31.055646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.745 "name": "Existed_Raid", 00:10:19.745 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:19.745 "strip_size_kb": 64, 00:10:19.745 "state": "configuring", 00:10:19.745 "raid_level": "raid0", 00:10:19.745 "superblock": true, 00:10:19.745 "num_base_bdevs": 4, 00:10:19.745 "num_base_bdevs_discovered": 2, 00:10:19.745 "num_base_bdevs_operational": 4, 00:10:19.745 "base_bdevs_list": [ 00:10:19.745 { 00:10:19.745 "name": null, 00:10:19.745 
"uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:19.745 "is_configured": false, 00:10:19.745 "data_offset": 0, 00:10:19.745 "data_size": 63488 00:10:19.745 }, 00:10:19.745 { 00:10:19.745 "name": null, 00:10:19.745 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:19.745 "is_configured": false, 00:10:19.745 "data_offset": 0, 00:10:19.745 "data_size": 63488 00:10:19.745 }, 00:10:19.745 { 00:10:19.745 "name": "BaseBdev3", 00:10:19.745 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:19.745 "is_configured": true, 00:10:19.745 "data_offset": 2048, 00:10:19.745 "data_size": 63488 00:10:19.745 }, 00:10:19.745 { 00:10:19.745 "name": "BaseBdev4", 00:10:19.745 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:19.745 "is_configured": true, 00:10:19.745 "data_offset": 2048, 00:10:19.745 "data_size": 63488 00:10:19.745 } 00:10:19.745 ] 00:10:19.745 }' 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.745 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.316 [2024-12-06 23:44:31.681497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.316 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.317 23:44:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.317 "name": "Existed_Raid", 00:10:20.317 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:20.317 "strip_size_kb": 64, 00:10:20.317 "state": "configuring", 00:10:20.317 "raid_level": "raid0", 00:10:20.317 "superblock": true, 00:10:20.317 "num_base_bdevs": 4, 00:10:20.317 "num_base_bdevs_discovered": 3, 00:10:20.317 "num_base_bdevs_operational": 4, 00:10:20.317 "base_bdevs_list": [ 00:10:20.317 { 00:10:20.317 "name": null, 00:10:20.317 "uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:20.317 "is_configured": false, 00:10:20.317 "data_offset": 0, 00:10:20.317 "data_size": 63488 00:10:20.317 }, 00:10:20.317 { 00:10:20.317 "name": "BaseBdev2", 00:10:20.317 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:20.317 "is_configured": true, 00:10:20.317 "data_offset": 2048, 00:10:20.317 "data_size": 63488 00:10:20.317 }, 00:10:20.317 { 00:10:20.317 "name": "BaseBdev3", 00:10:20.317 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:20.317 "is_configured": true, 00:10:20.317 "data_offset": 2048, 00:10:20.317 "data_size": 63488 00:10:20.317 }, 00:10:20.317 { 00:10:20.317 "name": "BaseBdev4", 00:10:20.317 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:20.317 "is_configured": true, 00:10:20.317 "data_offset": 2048, 00:10:20.317 "data_size": 63488 00:10:20.317 } 00:10:20.317 ] 00:10:20.317 }' 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.317 23:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.577 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.577 23:44:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.577 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.577 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.577 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1ae7e1d0-c8ab-4f28-b814-5533facf1d83 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 [2024-12-06 23:44:32.246898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.837 [2024-12-06 23:44:32.247257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:20.837 [2024-12-06 23:44:32.247275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.837 [2024-12-06 23:44:32.247553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:20.837 [2024-12-06 23:44:32.247715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.837 [2024-12-06 23:44:32.247727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:20.837 NewBaseBdev 00:10:20.837 [2024-12-06 23:44:32.247852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.837 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.837 23:44:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.837 [ 00:10:20.837 { 00:10:20.837 "name": "NewBaseBdev", 00:10:20.837 "aliases": [ 00:10:20.837 "1ae7e1d0-c8ab-4f28-b814-5533facf1d83" 00:10:20.837 ], 00:10:20.837 "product_name": "Malloc disk", 00:10:20.837 "block_size": 512, 00:10:20.837 "num_blocks": 65536, 00:10:20.837 "uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:20.838 "assigned_rate_limits": { 00:10:20.838 "rw_ios_per_sec": 0, 00:10:20.838 "rw_mbytes_per_sec": 0, 00:10:20.838 "r_mbytes_per_sec": 0, 00:10:20.838 "w_mbytes_per_sec": 0 00:10:20.838 }, 00:10:20.838 "claimed": true, 00:10:20.838 "claim_type": "exclusive_write", 00:10:20.838 "zoned": false, 00:10:20.838 "supported_io_types": { 00:10:20.838 "read": true, 00:10:20.838 "write": true, 00:10:20.838 "unmap": true, 00:10:20.838 "flush": true, 00:10:20.838 "reset": true, 00:10:20.838 "nvme_admin": false, 00:10:20.838 "nvme_io": false, 00:10:20.838 "nvme_io_md": false, 00:10:20.838 "write_zeroes": true, 00:10:20.838 "zcopy": true, 00:10:20.838 "get_zone_info": false, 00:10:20.838 "zone_management": false, 00:10:20.838 "zone_append": false, 00:10:20.838 "compare": false, 00:10:20.838 "compare_and_write": false, 00:10:20.838 "abort": true, 00:10:20.838 "seek_hole": false, 00:10:20.838 "seek_data": false, 00:10:20.838 "copy": true, 00:10:20.838 "nvme_iov_md": false 00:10:20.838 }, 00:10:20.838 "memory_domains": [ 00:10:20.838 { 00:10:20.838 "dma_device_id": "system", 00:10:20.838 "dma_device_type": 1 00:10:20.838 }, 00:10:20.838 { 00:10:20.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.838 "dma_device_type": 2 00:10:20.838 } 00:10:20.838 ], 00:10:20.838 "driver_specific": {} 00:10:20.838 } 00:10:20.838 ] 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.838 23:44:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.838 "name": "Existed_Raid", 00:10:20.838 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:20.838 "strip_size_kb": 64, 00:10:20.838 
"state": "online", 00:10:20.838 "raid_level": "raid0", 00:10:20.838 "superblock": true, 00:10:20.838 "num_base_bdevs": 4, 00:10:20.838 "num_base_bdevs_discovered": 4, 00:10:20.838 "num_base_bdevs_operational": 4, 00:10:20.838 "base_bdevs_list": [ 00:10:20.838 { 00:10:20.838 "name": "NewBaseBdev", 00:10:20.838 "uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:20.838 "is_configured": true, 00:10:20.838 "data_offset": 2048, 00:10:20.838 "data_size": 63488 00:10:20.838 }, 00:10:20.838 { 00:10:20.838 "name": "BaseBdev2", 00:10:20.838 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:20.838 "is_configured": true, 00:10:20.838 "data_offset": 2048, 00:10:20.838 "data_size": 63488 00:10:20.838 }, 00:10:20.838 { 00:10:20.838 "name": "BaseBdev3", 00:10:20.838 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:20.838 "is_configured": true, 00:10:20.838 "data_offset": 2048, 00:10:20.838 "data_size": 63488 00:10:20.838 }, 00:10:20.838 { 00:10:20.838 "name": "BaseBdev4", 00:10:20.838 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:20.838 "is_configured": true, 00:10:20.838 "data_offset": 2048, 00:10:20.838 "data_size": 63488 00:10:20.838 } 00:10:20.838 ] 00:10:20.838 }' 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.838 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.408 
23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.408 [2024-12-06 23:44:32.738484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.408 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.408 "name": "Existed_Raid", 00:10:21.408 "aliases": [ 00:10:21.408 "c7f1de61-1c3d-4e0f-b29a-bde13de7f947" 00:10:21.408 ], 00:10:21.408 "product_name": "Raid Volume", 00:10:21.408 "block_size": 512, 00:10:21.408 "num_blocks": 253952, 00:10:21.408 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:21.408 "assigned_rate_limits": { 00:10:21.408 "rw_ios_per_sec": 0, 00:10:21.408 "rw_mbytes_per_sec": 0, 00:10:21.408 "r_mbytes_per_sec": 0, 00:10:21.408 "w_mbytes_per_sec": 0 00:10:21.408 }, 00:10:21.408 "claimed": false, 00:10:21.408 "zoned": false, 00:10:21.408 "supported_io_types": { 00:10:21.408 "read": true, 00:10:21.408 "write": true, 00:10:21.408 "unmap": true, 00:10:21.408 "flush": true, 00:10:21.408 "reset": true, 00:10:21.408 "nvme_admin": false, 00:10:21.408 "nvme_io": false, 00:10:21.408 "nvme_io_md": false, 00:10:21.408 "write_zeroes": true, 00:10:21.408 "zcopy": false, 00:10:21.408 "get_zone_info": false, 00:10:21.408 "zone_management": false, 00:10:21.408 "zone_append": false, 00:10:21.408 "compare": false, 00:10:21.408 "compare_and_write": false, 00:10:21.408 "abort": 
false, 00:10:21.408 "seek_hole": false, 00:10:21.408 "seek_data": false, 00:10:21.408 "copy": false, 00:10:21.408 "nvme_iov_md": false 00:10:21.408 }, 00:10:21.408 "memory_domains": [ 00:10:21.408 { 00:10:21.408 "dma_device_id": "system", 00:10:21.408 "dma_device_type": 1 00:10:21.408 }, 00:10:21.408 { 00:10:21.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.408 "dma_device_type": 2 00:10:21.408 }, 00:10:21.408 { 00:10:21.408 "dma_device_id": "system", 00:10:21.408 "dma_device_type": 1 00:10:21.409 }, 00:10:21.409 { 00:10:21.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.409 "dma_device_type": 2 00:10:21.409 }, 00:10:21.409 { 00:10:21.409 "dma_device_id": "system", 00:10:21.409 "dma_device_type": 1 00:10:21.409 }, 00:10:21.409 { 00:10:21.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.409 "dma_device_type": 2 00:10:21.409 }, 00:10:21.409 { 00:10:21.409 "dma_device_id": "system", 00:10:21.409 "dma_device_type": 1 00:10:21.409 }, 00:10:21.409 { 00:10:21.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.409 "dma_device_type": 2 00:10:21.409 } 00:10:21.409 ], 00:10:21.409 "driver_specific": { 00:10:21.409 "raid": { 00:10:21.409 "uuid": "c7f1de61-1c3d-4e0f-b29a-bde13de7f947", 00:10:21.409 "strip_size_kb": 64, 00:10:21.409 "state": "online", 00:10:21.409 "raid_level": "raid0", 00:10:21.409 "superblock": true, 00:10:21.409 "num_base_bdevs": 4, 00:10:21.409 "num_base_bdevs_discovered": 4, 00:10:21.409 "num_base_bdevs_operational": 4, 00:10:21.409 "base_bdevs_list": [ 00:10:21.409 { 00:10:21.409 "name": "NewBaseBdev", 00:10:21.409 "uuid": "1ae7e1d0-c8ab-4f28-b814-5533facf1d83", 00:10:21.409 "is_configured": true, 00:10:21.409 "data_offset": 2048, 00:10:21.409 "data_size": 63488 00:10:21.409 }, 00:10:21.409 { 00:10:21.409 "name": "BaseBdev2", 00:10:21.409 "uuid": "5dfe13b9-7193-44a4-bcb4-b866e28e37be", 00:10:21.409 "is_configured": true, 00:10:21.409 "data_offset": 2048, 00:10:21.409 "data_size": 63488 00:10:21.409 }, 00:10:21.409 { 00:10:21.409 
"name": "BaseBdev3", 00:10:21.409 "uuid": "3685f884-f087-41ac-a209-da23692dac67", 00:10:21.409 "is_configured": true, 00:10:21.409 "data_offset": 2048, 00:10:21.409 "data_size": 63488 00:10:21.409 }, 00:10:21.409 { 00:10:21.409 "name": "BaseBdev4", 00:10:21.409 "uuid": "f66ae90b-f193-4174-a5ad-0b0f428c89c6", 00:10:21.409 "is_configured": true, 00:10:21.409 "data_offset": 2048, 00:10:21.409 "data_size": 63488 00:10:21.409 } 00:10:21.409 ] 00:10:21.409 } 00:10:21.409 } 00:10:21.409 }' 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:21.409 BaseBdev2 00:10:21.409 BaseBdev3 00:10:21.409 BaseBdev4' 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.409 23:44:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.409 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.669 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.669 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.669 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.669 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.669 23:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.669 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.669 23:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.669 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.669 [2024-12-06 23:44:33.089503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.670 [2024-12-06 23:44:33.089625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.670 [2024-12-06 23:44:33.089732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.670 [2024-12-06 23:44:33.089814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.670 [2024-12-06 23:44:33.089825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69963 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69963 ']' 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69963 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69963 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.670 killing process with pid 69963 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69963' 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69963 00:10:21.670 [2024-12-06 23:44:33.126696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.670 23:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69963 00:10:22.239 [2024-12-06 23:44:33.555317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.629 23:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.629 00:10:23.629 real 0m11.768s 00:10:23.629 user 0m18.481s 00:10:23.629 sys 0m2.127s 00:10:23.629 23:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.629 
************************************ 00:10:23.629 END TEST raid_state_function_test_sb 00:10:23.629 ************************************ 00:10:23.629 23:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.629 23:44:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:23.629 23:44:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.629 23:44:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.629 23:44:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.629 ************************************ 00:10:23.629 START TEST raid_superblock_test 00:10:23.629 ************************************ 00:10:23.629 23:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:23.629 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:23.629 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70638 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70638 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70638 ']' 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.630 23:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.630 [2024-12-06 23:44:34.958569] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:10:23.630 [2024-12-06 23:44:34.958805] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70638 ] 00:10:23.630 [2024-12-06 23:44:35.111892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.895 [2024-12-06 23:44:35.252146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.179 [2024-12-06 23:44:35.493137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.179 [2024-12-06 23:44:35.493313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:24.459 
23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.459 malloc1 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.459 [2024-12-06 23:44:35.857265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.459 [2024-12-06 23:44:35.857415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.459 [2024-12-06 23:44:35.857457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:24.459 [2024-12-06 23:44:35.857486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.459 [2024-12-06 23:44:35.859959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.459 [2024-12-06 23:44:35.860032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.459 pt1 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.459 malloc2 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.459 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.459 [2024-12-06 23:44:35.922366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.459 [2024-12-06 23:44:35.922488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.459 [2024-12-06 23:44:35.922519] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:24.459 [2024-12-06 23:44:35.922528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.459 [2024-12-06 23:44:35.924854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.459 [2024-12-06 23:44:35.924886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.459 
pt2 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.460 malloc3 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.460 23:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.460 [2024-12-06 23:44:35.999347] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.460 [2024-12-06 23:44:35.999477] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.460 [2024-12-06 23:44:35.999517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:24.460 [2024-12-06 23:44:35.999579] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.460 [2024-12-06 23:44:36.002006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.460 [2024-12-06 23:44:36.002077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.460 pt3 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.460 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.719 malloc4 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.719 [2024-12-06 23:44:36.068601] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:24.719 [2024-12-06 23:44:36.068768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.719 [2024-12-06 23:44:36.068819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:24.719 [2024-12-06 23:44:36.068859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.719 [2024-12-06 23:44:36.071321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.719 [2024-12-06 23:44:36.071393] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:24.719 pt4 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.719 [2024-12-06 23:44:36.080611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.719 [2024-12-06 
23:44:36.082752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.719 [2024-12-06 23:44:36.082897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.719 [2024-12-06 23:44:36.082974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:24.719 [2024-12-06 23:44:36.083203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:24.719 [2024-12-06 23:44:36.083250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.719 [2024-12-06 23:44:36.083547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:24.719 [2024-12-06 23:44:36.083787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:24.719 [2024-12-06 23:44:36.083836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:24.719 [2024-12-06 23:44:36.084032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.719 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.720 "name": "raid_bdev1", 00:10:24.720 "uuid": "089366b9-abae-4f4e-8667-8f37bd17a625", 00:10:24.720 "strip_size_kb": 64, 00:10:24.720 "state": "online", 00:10:24.720 "raid_level": "raid0", 00:10:24.720 "superblock": true, 00:10:24.720 "num_base_bdevs": 4, 00:10:24.720 "num_base_bdevs_discovered": 4, 00:10:24.720 "num_base_bdevs_operational": 4, 00:10:24.720 "base_bdevs_list": [ 00:10:24.720 { 00:10:24.720 "name": "pt1", 00:10:24.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.720 "is_configured": true, 00:10:24.720 "data_offset": 2048, 00:10:24.720 "data_size": 63488 00:10:24.720 }, 00:10:24.720 { 00:10:24.720 "name": "pt2", 00:10:24.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.720 "is_configured": true, 00:10:24.720 "data_offset": 2048, 00:10:24.720 "data_size": 63488 00:10:24.720 }, 00:10:24.720 { 00:10:24.720 "name": "pt3", 00:10:24.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.720 "is_configured": true, 00:10:24.720 "data_offset": 2048, 00:10:24.720 
"data_size": 63488 00:10:24.720 }, 00:10:24.720 { 00:10:24.720 "name": "pt4", 00:10:24.720 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.720 "is_configured": true, 00:10:24.720 "data_offset": 2048, 00:10:24.720 "data_size": 63488 00:10:24.720 } 00:10:24.720 ] 00:10:24.720 }' 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.720 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.980 [2024-12-06 23:44:36.480260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.980 "name": "raid_bdev1", 00:10:24.980 "aliases": [ 00:10:24.980 "089366b9-abae-4f4e-8667-8f37bd17a625" 
00:10:24.980 ], 00:10:24.980 "product_name": "Raid Volume", 00:10:24.980 "block_size": 512, 00:10:24.980 "num_blocks": 253952, 00:10:24.980 "uuid": "089366b9-abae-4f4e-8667-8f37bd17a625", 00:10:24.980 "assigned_rate_limits": { 00:10:24.980 "rw_ios_per_sec": 0, 00:10:24.980 "rw_mbytes_per_sec": 0, 00:10:24.980 "r_mbytes_per_sec": 0, 00:10:24.980 "w_mbytes_per_sec": 0 00:10:24.980 }, 00:10:24.980 "claimed": false, 00:10:24.980 "zoned": false, 00:10:24.980 "supported_io_types": { 00:10:24.980 "read": true, 00:10:24.980 "write": true, 00:10:24.980 "unmap": true, 00:10:24.980 "flush": true, 00:10:24.980 "reset": true, 00:10:24.980 "nvme_admin": false, 00:10:24.980 "nvme_io": false, 00:10:24.980 "nvme_io_md": false, 00:10:24.980 "write_zeroes": true, 00:10:24.980 "zcopy": false, 00:10:24.980 "get_zone_info": false, 00:10:24.980 "zone_management": false, 00:10:24.980 "zone_append": false, 00:10:24.980 "compare": false, 00:10:24.980 "compare_and_write": false, 00:10:24.980 "abort": false, 00:10:24.980 "seek_hole": false, 00:10:24.980 "seek_data": false, 00:10:24.980 "copy": false, 00:10:24.980 "nvme_iov_md": false 00:10:24.980 }, 00:10:24.980 "memory_domains": [ 00:10:24.980 { 00:10:24.980 "dma_device_id": "system", 00:10:24.980 "dma_device_type": 1 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.980 "dma_device_type": 2 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "dma_device_id": "system", 00:10:24.980 "dma_device_type": 1 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.980 "dma_device_type": 2 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "dma_device_id": "system", 00:10:24.980 "dma_device_type": 1 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.980 "dma_device_type": 2 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "dma_device_id": "system", 00:10:24.980 "dma_device_type": 1 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:24.980 "dma_device_type": 2 00:10:24.980 } 00:10:24.980 ], 00:10:24.980 "driver_specific": { 00:10:24.980 "raid": { 00:10:24.980 "uuid": "089366b9-abae-4f4e-8667-8f37bd17a625", 00:10:24.980 "strip_size_kb": 64, 00:10:24.980 "state": "online", 00:10:24.980 "raid_level": "raid0", 00:10:24.980 "superblock": true, 00:10:24.980 "num_base_bdevs": 4, 00:10:24.980 "num_base_bdevs_discovered": 4, 00:10:24.980 "num_base_bdevs_operational": 4, 00:10:24.980 "base_bdevs_list": [ 00:10:24.980 { 00:10:24.980 "name": "pt1", 00:10:24.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.980 "is_configured": true, 00:10:24.980 "data_offset": 2048, 00:10:24.980 "data_size": 63488 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "name": "pt2", 00:10:24.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.980 "is_configured": true, 00:10:24.980 "data_offset": 2048, 00:10:24.980 "data_size": 63488 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "name": "pt3", 00:10:24.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.980 "is_configured": true, 00:10:24.980 "data_offset": 2048, 00:10:24.980 "data_size": 63488 00:10:24.980 }, 00:10:24.980 { 00:10:24.980 "name": "pt4", 00:10:24.980 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.980 "is_configured": true, 00:10:24.980 "data_offset": 2048, 00:10:24.980 "data_size": 63488 00:10:24.980 } 00:10:24.980 ] 00:10:24.980 } 00:10:24.980 } 00:10:24.980 }' 00:10:24.980 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:25.241 pt2 00:10:25.241 pt3 00:10:25.241 pt4' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.241 23:44:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.241 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.502 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.502 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.502 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:25.502 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.502 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:25.502 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.502 [2024-12-06 23:44:36.827617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.502 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=089366b9-abae-4f4e-8667-8f37bd17a625 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 089366b9-abae-4f4e-8667-8f37bd17a625 ']' 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.503 [2024-12-06 23:44:36.871252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.503 [2024-12-06 23:44:36.871281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.503 [2024-12-06 23:44:36.871379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.503 [2024-12-06 23:44:36.871459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.503 [2024-12-06 23:44:36.871476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.503 23:44:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.503 [2024-12-06 23:44:37.031065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:25.503 [2024-12-06 23:44:37.033283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:25.503 [2024-12-06 23:44:37.033328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:25.503 [2024-12-06 23:44:37.033362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:25.503 [2024-12-06 23:44:37.033417] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:25.503 [2024-12-06 23:44:37.033472] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:25.503 [2024-12-06 23:44:37.033490] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:25.503 [2024-12-06 23:44:37.033508] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:25.503 [2024-12-06 23:44:37.033521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.503 [2024-12-06 23:44:37.033536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:25.503 request: 00:10:25.503 { 00:10:25.503 "name": "raid_bdev1", 00:10:25.503 "raid_level": "raid0", 00:10:25.503 "base_bdevs": [ 00:10:25.503 "malloc1", 00:10:25.503 "malloc2", 00:10:25.503 "malloc3", 00:10:25.503 "malloc4" 00:10:25.503 ], 00:10:25.503 "strip_size_kb": 64, 00:10:25.503 "superblock": false, 00:10:25.503 "method": "bdev_raid_create", 00:10:25.503 "req_id": 1 00:10:25.503 } 00:10:25.503 Got JSON-RPC error response 00:10:25.503 response: 00:10:25.503 { 00:10:25.503 "code": -17, 00:10:25.503 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:25.503 } 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.503 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.504 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.764 [2024-12-06 23:44:37.094866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.764 [2024-12-06 23:44:37.095022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.764 [2024-12-06 23:44:37.095059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.764 [2024-12-06 23:44:37.095093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.764 [2024-12-06 23:44:37.097534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.764 [2024-12-06 23:44:37.097617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.764 [2024-12-06 23:44:37.097731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:25.764 [2024-12-06 23:44:37.097812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.764 pt1 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.764 "name": "raid_bdev1", 00:10:25.764 "uuid": "089366b9-abae-4f4e-8667-8f37bd17a625", 00:10:25.764 "strip_size_kb": 64, 00:10:25.764 "state": "configuring", 00:10:25.764 "raid_level": "raid0", 00:10:25.764 "superblock": true, 00:10:25.764 "num_base_bdevs": 4, 00:10:25.764 "num_base_bdevs_discovered": 1, 00:10:25.764 "num_base_bdevs_operational": 4, 00:10:25.764 "base_bdevs_list": [ 00:10:25.764 { 00:10:25.764 "name": "pt1", 00:10:25.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.764 "is_configured": true, 00:10:25.764 "data_offset": 2048, 00:10:25.764 "data_size": 63488 00:10:25.764 }, 00:10:25.764 { 00:10:25.764 "name": null, 00:10:25.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.764 "is_configured": false, 00:10:25.764 "data_offset": 2048, 00:10:25.764 "data_size": 63488 00:10:25.764 }, 00:10:25.764 { 00:10:25.764 "name": null, 00:10:25.764 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.764 "is_configured": false, 00:10:25.764 "data_offset": 2048, 00:10:25.764 "data_size": 63488 00:10:25.764 }, 00:10:25.764 { 00:10:25.764 "name": null, 00:10:25.764 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.764 "is_configured": false, 00:10:25.764 "data_offset": 2048, 00:10:25.764 "data_size": 63488 00:10:25.764 } 00:10:25.764 ] 00:10:25.764 }' 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.764 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.024 [2024-12-06 23:44:37.466284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.024 [2024-12-06 23:44:37.466380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.024 [2024-12-06 23:44:37.466402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:26.024 [2024-12-06 23:44:37.466414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.024 [2024-12-06 23:44:37.466955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.024 [2024-12-06 23:44:37.466978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.024 [2024-12-06 23:44:37.467084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.024 [2024-12-06 23:44:37.467114] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.024 pt2 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.024 [2024-12-06 23:44:37.474247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.024 23:44:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.024 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.024 "name": "raid_bdev1", 00:10:26.024 "uuid": "089366b9-abae-4f4e-8667-8f37bd17a625", 00:10:26.024 "strip_size_kb": 64, 00:10:26.024 "state": "configuring", 00:10:26.024 "raid_level": "raid0", 00:10:26.024 "superblock": true, 00:10:26.024 "num_base_bdevs": 4, 00:10:26.024 "num_base_bdevs_discovered": 1, 00:10:26.024 "num_base_bdevs_operational": 4, 00:10:26.024 "base_bdevs_list": [ 00:10:26.024 { 00:10:26.024 "name": "pt1", 00:10:26.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.024 "is_configured": true, 00:10:26.024 "data_offset": 2048, 00:10:26.024 "data_size": 63488 00:10:26.024 }, 00:10:26.024 { 00:10:26.024 "name": null, 00:10:26.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.024 "is_configured": false, 00:10:26.024 "data_offset": 0, 00:10:26.024 "data_size": 63488 00:10:26.024 }, 00:10:26.024 { 00:10:26.024 "name": null, 00:10:26.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.025 "is_configured": false, 00:10:26.025 "data_offset": 2048, 00:10:26.025 "data_size": 63488 00:10:26.025 }, 00:10:26.025 { 00:10:26.025 "name": null, 00:10:26.025 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.025 "is_configured": false, 00:10:26.025 "data_offset": 2048, 00:10:26.025 "data_size": 63488 00:10:26.025 } 00:10:26.025 ] 00:10:26.025 }' 00:10:26.025 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.025 23:44:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.594 [2024-12-06 23:44:37.873602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.594 [2024-12-06 23:44:37.873683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.594 [2024-12-06 23:44:37.873705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:26.594 [2024-12-06 23:44:37.873714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.594 [2024-12-06 23:44:37.874199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.594 [2024-12-06 23:44:37.874223] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.594 [2024-12-06 23:44:37.874326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.594 [2024-12-06 23:44:37.874352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.594 pt2 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.594 [2024-12-06 23:44:37.885536] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.594 [2024-12-06 23:44:37.885584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.594 [2024-12-06 23:44:37.885602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:26.594 [2024-12-06 23:44:37.885609] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.594 [2024-12-06 23:44:37.885989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.594 [2024-12-06 23:44:37.886005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.594 [2024-12-06 23:44:37.886065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:26.594 [2024-12-06 23:44:37.886095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.594 pt3 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.594 [2024-12-06 23:44:37.897491] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:26.594 [2024-12-06 23:44:37.897530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.594 [2024-12-06 23:44:37.897544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:26.594 [2024-12-06 23:44:37.897552] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.594 [2024-12-06 23:44:37.897917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.594 [2024-12-06 23:44:37.897933] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:26.594 [2024-12-06 23:44:37.897990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:26.594 [2024-12-06 23:44:37.898010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:26.594 [2024-12-06 23:44:37.898134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.594 [2024-12-06 23:44:37.898149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:26.594 [2024-12-06 23:44:37.898401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:26.594 [2024-12-06 23:44:37.898550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:26.594 [2024-12-06 23:44:37.898564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:26.594 [2024-12-06 23:44:37.898709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.594 pt4 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:26.594 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.595 "name": "raid_bdev1", 00:10:26.595 "uuid": "089366b9-abae-4f4e-8667-8f37bd17a625", 00:10:26.595 "strip_size_kb": 64, 00:10:26.595 "state": "online", 00:10:26.595 "raid_level": "raid0", 00:10:26.595 
"superblock": true, 00:10:26.595 "num_base_bdevs": 4, 00:10:26.595 "num_base_bdevs_discovered": 4, 00:10:26.595 "num_base_bdevs_operational": 4, 00:10:26.595 "base_bdevs_list": [ 00:10:26.595 { 00:10:26.595 "name": "pt1", 00:10:26.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.595 "is_configured": true, 00:10:26.595 "data_offset": 2048, 00:10:26.595 "data_size": 63488 00:10:26.595 }, 00:10:26.595 { 00:10:26.595 "name": "pt2", 00:10:26.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.595 "is_configured": true, 00:10:26.595 "data_offset": 2048, 00:10:26.595 "data_size": 63488 00:10:26.595 }, 00:10:26.595 { 00:10:26.595 "name": "pt3", 00:10:26.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.595 "is_configured": true, 00:10:26.595 "data_offset": 2048, 00:10:26.595 "data_size": 63488 00:10:26.595 }, 00:10:26.595 { 00:10:26.595 "name": "pt4", 00:10:26.595 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.595 "is_configured": true, 00:10:26.595 "data_offset": 2048, 00:10:26.595 "data_size": 63488 00:10:26.595 } 00:10:26.595 ] 00:10:26.595 }' 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.595 23:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.854 23:44:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.854 [2024-12-06 23:44:38.293271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.854 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.854 "name": "raid_bdev1", 00:10:26.854 "aliases": [ 00:10:26.854 "089366b9-abae-4f4e-8667-8f37bd17a625" 00:10:26.854 ], 00:10:26.854 "product_name": "Raid Volume", 00:10:26.854 "block_size": 512, 00:10:26.854 "num_blocks": 253952, 00:10:26.854 "uuid": "089366b9-abae-4f4e-8667-8f37bd17a625", 00:10:26.854 "assigned_rate_limits": { 00:10:26.854 "rw_ios_per_sec": 0, 00:10:26.854 "rw_mbytes_per_sec": 0, 00:10:26.854 "r_mbytes_per_sec": 0, 00:10:26.854 "w_mbytes_per_sec": 0 00:10:26.854 }, 00:10:26.854 "claimed": false, 00:10:26.854 "zoned": false, 00:10:26.854 "supported_io_types": { 00:10:26.854 "read": true, 00:10:26.854 "write": true, 00:10:26.854 "unmap": true, 00:10:26.854 "flush": true, 00:10:26.854 "reset": true, 00:10:26.854 "nvme_admin": false, 00:10:26.854 "nvme_io": false, 00:10:26.854 "nvme_io_md": false, 00:10:26.854 "write_zeroes": true, 00:10:26.854 "zcopy": false, 00:10:26.854 "get_zone_info": false, 00:10:26.854 "zone_management": false, 00:10:26.854 "zone_append": false, 00:10:26.854 "compare": false, 00:10:26.854 "compare_and_write": false, 00:10:26.854 "abort": false, 00:10:26.854 "seek_hole": false, 00:10:26.855 "seek_data": false, 00:10:26.855 "copy": false, 00:10:26.855 "nvme_iov_md": false 00:10:26.855 }, 00:10:26.855 
"memory_domains": [ 00:10:26.855 { 00:10:26.855 "dma_device_id": "system", 00:10:26.855 "dma_device_type": 1 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.855 "dma_device_type": 2 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "dma_device_id": "system", 00:10:26.855 "dma_device_type": 1 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.855 "dma_device_type": 2 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "dma_device_id": "system", 00:10:26.855 "dma_device_type": 1 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.855 "dma_device_type": 2 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "dma_device_id": "system", 00:10:26.855 "dma_device_type": 1 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.855 "dma_device_type": 2 00:10:26.855 } 00:10:26.855 ], 00:10:26.855 "driver_specific": { 00:10:26.855 "raid": { 00:10:26.855 "uuid": "089366b9-abae-4f4e-8667-8f37bd17a625", 00:10:26.855 "strip_size_kb": 64, 00:10:26.855 "state": "online", 00:10:26.855 "raid_level": "raid0", 00:10:26.855 "superblock": true, 00:10:26.855 "num_base_bdevs": 4, 00:10:26.855 "num_base_bdevs_discovered": 4, 00:10:26.855 "num_base_bdevs_operational": 4, 00:10:26.855 "base_bdevs_list": [ 00:10:26.855 { 00:10:26.855 "name": "pt1", 00:10:26.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.855 "is_configured": true, 00:10:26.855 "data_offset": 2048, 00:10:26.855 "data_size": 63488 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "name": "pt2", 00:10:26.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.855 "is_configured": true, 00:10:26.855 "data_offset": 2048, 00:10:26.855 "data_size": 63488 00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "name": "pt3", 00:10:26.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.855 "is_configured": true, 00:10:26.855 "data_offset": 2048, 00:10:26.855 "data_size": 63488 
00:10:26.855 }, 00:10:26.855 { 00:10:26.855 "name": "pt4", 00:10:26.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.855 "is_configured": true, 00:10:26.855 "data_offset": 2048, 00:10:26.855 "data_size": 63488 00:10:26.855 } 00:10:26.855 ] 00:10:26.855 } 00:10:26.855 } 00:10:26.855 }' 00:10:26.855 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.855 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:26.855 pt2 00:10:26.855 pt3 00:10:26.855 pt4' 00:10:26.855 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:27.115 [2024-12-06 23:44:38.624565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 089366b9-abae-4f4e-8667-8f37bd17a625 '!=' 089366b9-abae-4f4e-8667-8f37bd17a625 ']' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70638 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70638 ']' 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70638 00:10:27.115 23:44:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:27.374 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.374 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70638 00:10:27.374 killing process with pid 70638 00:10:27.374 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.374 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.374 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70638' 00:10:27.374 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70638 00:10:27.374 [2024-12-06 23:44:38.713068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.374 [2024-12-06 23:44:38.713179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.374 23:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70638 00:10:27.375 [2024-12-06 23:44:38.713262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.375 [2024-12-06 23:44:38.713272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:27.634 [2024-12-06 23:44:39.139917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.011 23:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:29.012 00:10:29.012 real 0m5.507s 00:10:29.012 user 0m7.611s 00:10:29.012 sys 0m1.022s 00:10:29.012 23:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.012 23:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.012 ************************************ 00:10:29.012 END TEST raid_superblock_test 
00:10:29.012 ************************************ 00:10:29.012 23:44:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:29.012 23:44:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.012 23:44:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.012 23:44:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.012 ************************************ 00:10:29.012 START TEST raid_read_error_test 00:10:29.012 ************************************ 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xJstDWMwIP 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70897 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70897 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70897 ']' 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.012 23:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.012 [2024-12-06 23:44:40.553675] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:10:29.012 [2024-12-06 23:44:40.553867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ] 00:10:29.271 [2024-12-06 23:44:40.728197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.531 [2024-12-06 23:44:40.867315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.531 [2024-12-06 23:44:41.092042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.789 [2024-12-06 23:44:41.092197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 BaseBdev1_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 true 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 [2024-12-06 23:44:41.423499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.049 [2024-12-06 23:44:41.423570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.049 [2024-12-06 23:44:41.423590] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.049 [2024-12-06 23:44:41.423602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.049 [2024-12-06 23:44:41.425944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.049 [2024-12-06 23:44:41.426069] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.049 BaseBdev1 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 BaseBdev2_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 true 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 [2024-12-06 23:44:41.495284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.049 [2024-12-06 23:44:41.495343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.049 [2024-12-06 23:44:41.495359] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.049 [2024-12-06 23:44:41.495370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.049 [2024-12-06 23:44:41.497655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.049 [2024-12-06 23:44:41.497698] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:30.049 BaseBdev2 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 BaseBdev3_malloc 00:10:30.049 23:44:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 true 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.049 [2024-12-06 23:44:41.583354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:30.049 [2024-12-06 23:44:41.583416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.049 [2024-12-06 23:44:41.583435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:30.049 [2024-12-06 23:44:41.583446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.049 [2024-12-06 23:44:41.585976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.049 [2024-12-06 23:44:41.586017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:30.049 BaseBdev3 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.049 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.307 BaseBdev4_malloc 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.307 true 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.307 [2024-12-06 23:44:41.657341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:30.307 [2024-12-06 23:44:41.657406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.307 [2024-12-06 23:44:41.657441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:30.307 [2024-12-06 23:44:41.657454] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.307 [2024-12-06 23:44:41.660093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.307 [2024-12-06 23:44:41.660192] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:30.307 BaseBdev4 00:10:30.307 23:44:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.308 [2024-12-06 23:44:41.669411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.308 [2024-12-06 23:44:41.671724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.308 [2024-12-06 23:44:41.671813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.308 [2024-12-06 23:44:41.671888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.308 [2024-12-06 23:44:41.672138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:30.308 [2024-12-06 23:44:41.672159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:30.308 [2024-12-06 23:44:41.672460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:30.308 [2024-12-06 23:44:41.672642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:30.308 [2024-12-06 23:44:41.672655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:30.308 [2024-12-06 23:44:41.672890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:30.308 23:44:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.308 "name": "raid_bdev1", 00:10:30.308 "uuid": "87fe56c4-f676-42a3-a2f5-b663c6ecc18d", 00:10:30.308 "strip_size_kb": 64, 00:10:30.308 "state": "online", 00:10:30.308 "raid_level": "raid0", 00:10:30.308 "superblock": true, 00:10:30.308 "num_base_bdevs": 4, 00:10:30.308 "num_base_bdevs_discovered": 4, 00:10:30.308 "num_base_bdevs_operational": 4, 00:10:30.308 "base_bdevs_list": [ 00:10:30.308 
{ 00:10:30.308 "name": "BaseBdev1", 00:10:30.308 "uuid": "b834f248-562e-52cf-874b-0d18a2689c02", 00:10:30.308 "is_configured": true, 00:10:30.308 "data_offset": 2048, 00:10:30.308 "data_size": 63488 00:10:30.308 }, 00:10:30.308 { 00:10:30.308 "name": "BaseBdev2", 00:10:30.308 "uuid": "07963dc7-fddb-5c7b-bcba-583f23143997", 00:10:30.308 "is_configured": true, 00:10:30.308 "data_offset": 2048, 00:10:30.308 "data_size": 63488 00:10:30.308 }, 00:10:30.308 { 00:10:30.308 "name": "BaseBdev3", 00:10:30.308 "uuid": "41cf6276-f794-517e-8440-a0fe4df8353e", 00:10:30.308 "is_configured": true, 00:10:30.308 "data_offset": 2048, 00:10:30.308 "data_size": 63488 00:10:30.308 }, 00:10:30.308 { 00:10:30.308 "name": "BaseBdev4", 00:10:30.308 "uuid": "6634b4a3-602a-5e29-809b-35f12b07317f", 00:10:30.308 "is_configured": true, 00:10:30.308 "data_offset": 2048, 00:10:30.308 "data_size": 63488 00:10:30.308 } 00:10:30.308 ] 00:10:30.308 }' 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.308 23:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.875 23:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.875 23:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.875 [2024-12-06 23:44:42.237880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.815 23:44:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.815 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.815 23:44:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.815 "name": "raid_bdev1", 00:10:31.815 "uuid": "87fe56c4-f676-42a3-a2f5-b663c6ecc18d", 00:10:31.815 "strip_size_kb": 64, 00:10:31.815 "state": "online", 00:10:31.815 "raid_level": "raid0", 00:10:31.815 "superblock": true, 00:10:31.815 "num_base_bdevs": 4, 00:10:31.815 "num_base_bdevs_discovered": 4, 00:10:31.815 "num_base_bdevs_operational": 4, 00:10:31.816 "base_bdevs_list": [ 00:10:31.816 { 00:10:31.816 "name": "BaseBdev1", 00:10:31.816 "uuid": "b834f248-562e-52cf-874b-0d18a2689c02", 00:10:31.816 "is_configured": true, 00:10:31.816 "data_offset": 2048, 00:10:31.816 "data_size": 63488 00:10:31.816 }, 00:10:31.816 { 00:10:31.816 "name": "BaseBdev2", 00:10:31.816 "uuid": "07963dc7-fddb-5c7b-bcba-583f23143997", 00:10:31.816 "is_configured": true, 00:10:31.816 "data_offset": 2048, 00:10:31.816 "data_size": 63488 00:10:31.816 }, 00:10:31.816 { 00:10:31.816 "name": "BaseBdev3", 00:10:31.816 "uuid": "41cf6276-f794-517e-8440-a0fe4df8353e", 00:10:31.816 "is_configured": true, 00:10:31.816 "data_offset": 2048, 00:10:31.816 "data_size": 63488 00:10:31.816 }, 00:10:31.816 { 00:10:31.816 "name": "BaseBdev4", 00:10:31.816 "uuid": "6634b4a3-602a-5e29-809b-35f12b07317f", 00:10:31.816 "is_configured": true, 00:10:31.816 "data_offset": 2048, 00:10:31.816 "data_size": 63488 00:10:31.816 } 00:10:31.816 ] 00:10:31.816 }' 00:10:31.816 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.816 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.075 [2024-12-06 23:44:43.602937] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.075 [2024-12-06 23:44:43.603078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.075 [2024-12-06 23:44:43.605774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.075 [2024-12-06 23:44:43.605880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.075 [2024-12-06 23:44:43.605948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.075 [2024-12-06 23:44:43.606004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:32.075 { 00:10:32.075 "results": [ 00:10:32.075 { 00:10:32.075 "job": "raid_bdev1", 00:10:32.075 "core_mask": "0x1", 00:10:32.075 "workload": "randrw", 00:10:32.075 "percentage": 50, 00:10:32.075 "status": "finished", 00:10:32.075 "queue_depth": 1, 00:10:32.075 "io_size": 131072, 00:10:32.075 "runtime": 1.365766, 00:10:32.075 "iops": 13424.700863837583, 00:10:32.075 "mibps": 1678.0876079796979, 00:10:32.075 "io_failed": 1, 00:10:32.075 "io_timeout": 0, 00:10:32.075 "avg_latency_us": 104.8627723541919, 00:10:32.075 "min_latency_us": 26.382532751091702, 00:10:32.075 "max_latency_us": 1466.6899563318777 00:10:32.075 } 00:10:32.075 ], 00:10:32.075 "core_count": 1 00:10:32.075 } 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70897 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70897 ']' 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70897 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.075 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70897 00:10:32.334 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.334 killing process with pid 70897 00:10:32.334 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.334 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70897' 00:10:32.334 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70897 00:10:32.334 [2024-12-06 23:44:43.639095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.334 23:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70897 00:10:32.617 [2024-12-06 23:44:44.000740] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xJstDWMwIP 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:34.124 00:10:34.124 real 0m4.867s 00:10:34.124 user 0m5.607s 00:10:34.124 sys 0m0.682s 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:34.124 23:44:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.124 ************************************ 00:10:34.124 END TEST raid_read_error_test 00:10:34.124 ************************************ 00:10:34.124 23:44:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:34.124 23:44:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.124 23:44:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.124 23:44:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.124 ************************************ 00:10:34.124 START TEST raid_write_error_test 00:10:34.124 ************************************ 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TWsIuPRcMU 00:10:34.124 23:44:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71048 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71048 00:10:34.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71048 ']' 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.124 23:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.124 [2024-12-06 23:44:45.500791] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:10:34.124 [2024-12-06 23:44:45.500926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71048 ] 00:10:34.124 [2024-12-06 23:44:45.675768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.383 [2024-12-06 23:44:45.816578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.642 [2024-12-06 23:44:46.052271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.642 [2024-12-06 23:44:46.052316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.901 BaseBdev1_malloc 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.901 true 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.901 [2024-12-06 23:44:46.378464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.901 [2024-12-06 23:44:46.378607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.901 [2024-12-06 23:44:46.378632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:34.901 [2024-12-06 23:44:46.378644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.901 [2024-12-06 23:44:46.381063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.901 [2024-12-06 23:44:46.381103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.901 BaseBdev1 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.901 BaseBdev2_malloc 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.901 23:44:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.901 true 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.901 [2024-12-06 23:44:46.444110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.901 [2024-12-06 23:44:46.444173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.901 [2024-12-06 23:44:46.444190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:34.901 [2024-12-06 23:44:46.444202] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.901 [2024-12-06 23:44:46.446588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.901 [2024-12-06 23:44:46.446626] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.901 BaseBdev2 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.901 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:35.159 BaseBdev3_malloc 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.159 true 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.159 [2024-12-06 23:44:46.518244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:35.159 [2024-12-06 23:44:46.518383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.159 [2024-12-06 23:44:46.518405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:35.159 [2024-12-06 23:44:46.518417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.159 [2024-12-06 23:44:46.520831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.159 [2024-12-06 23:44:46.520869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:35.159 BaseBdev3 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.159 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.160 BaseBdev4_malloc 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.160 true 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.160 [2024-12-06 23:44:46.592983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:35.160 [2024-12-06 23:44:46.593046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.160 [2024-12-06 23:44:46.593064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:35.160 [2024-12-06 23:44:46.593075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.160 [2024-12-06 23:44:46.595475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.160 [2024-12-06 23:44:46.595516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:35.160 BaseBdev4 
00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.160 [2024-12-06 23:44:46.605037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.160 [2024-12-06 23:44:46.607230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.160 [2024-12-06 23:44:46.607309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.160 [2024-12-06 23:44:46.607374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.160 [2024-12-06 23:44:46.607609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:35.160 [2024-12-06 23:44:46.607628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.160 [2024-12-06 23:44:46.607911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:35.160 [2024-12-06 23:44:46.608090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:35.160 [2024-12-06 23:44:46.608110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:35.160 [2024-12-06 23:44:46.608269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.160 "name": "raid_bdev1", 00:10:35.160 "uuid": "78853f29-9adf-4571-8c47-80052f48870f", 00:10:35.160 "strip_size_kb": 64, 00:10:35.160 "state": "online", 00:10:35.160 "raid_level": "raid0", 00:10:35.160 "superblock": true, 00:10:35.160 "num_base_bdevs": 4, 00:10:35.160 "num_base_bdevs_discovered": 4, 00:10:35.160 
"num_base_bdevs_operational": 4, 00:10:35.160 "base_bdevs_list": [ 00:10:35.160 { 00:10:35.160 "name": "BaseBdev1", 00:10:35.160 "uuid": "2c9e8793-a913-5378-9956-923a8de23315", 00:10:35.160 "is_configured": true, 00:10:35.160 "data_offset": 2048, 00:10:35.160 "data_size": 63488 00:10:35.160 }, 00:10:35.160 { 00:10:35.160 "name": "BaseBdev2", 00:10:35.160 "uuid": "be79a855-6dfe-5dd2-a77e-e35d09ad83f4", 00:10:35.160 "is_configured": true, 00:10:35.160 "data_offset": 2048, 00:10:35.160 "data_size": 63488 00:10:35.160 }, 00:10:35.160 { 00:10:35.160 "name": "BaseBdev3", 00:10:35.160 "uuid": "09bda34a-2467-5a6b-8e29-899bb7bd88be", 00:10:35.160 "is_configured": true, 00:10:35.160 "data_offset": 2048, 00:10:35.160 "data_size": 63488 00:10:35.160 }, 00:10:35.160 { 00:10:35.160 "name": "BaseBdev4", 00:10:35.160 "uuid": "a500e13d-daff-50c9-b91b-02130d18a1e1", 00:10:35.160 "is_configured": true, 00:10:35.160 "data_offset": 2048, 00:10:35.160 "data_size": 63488 00:10:35.160 } 00:10:35.160 ] 00:10:35.160 }' 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.160 23:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.419 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:35.419 23:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:35.678 [2024-12-06 23:44:47.061579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.617 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.618 23:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.618 23:44:48 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.618 23:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.618 "name": "raid_bdev1", 00:10:36.618 "uuid": "78853f29-9adf-4571-8c47-80052f48870f", 00:10:36.618 "strip_size_kb": 64, 00:10:36.618 "state": "online", 00:10:36.618 "raid_level": "raid0", 00:10:36.618 "superblock": true, 00:10:36.618 "num_base_bdevs": 4, 00:10:36.618 "num_base_bdevs_discovered": 4, 00:10:36.618 "num_base_bdevs_operational": 4, 00:10:36.618 "base_bdevs_list": [ 00:10:36.618 { 00:10:36.618 "name": "BaseBdev1", 00:10:36.618 "uuid": "2c9e8793-a913-5378-9956-923a8de23315", 00:10:36.618 "is_configured": true, 00:10:36.618 "data_offset": 2048, 00:10:36.618 "data_size": 63488 00:10:36.618 }, 00:10:36.618 { 00:10:36.618 "name": "BaseBdev2", 00:10:36.618 "uuid": "be79a855-6dfe-5dd2-a77e-e35d09ad83f4", 00:10:36.618 "is_configured": true, 00:10:36.618 "data_offset": 2048, 00:10:36.618 "data_size": 63488 00:10:36.618 }, 00:10:36.618 { 00:10:36.618 "name": "BaseBdev3", 00:10:36.618 "uuid": "09bda34a-2467-5a6b-8e29-899bb7bd88be", 00:10:36.618 "is_configured": true, 00:10:36.618 "data_offset": 2048, 00:10:36.618 "data_size": 63488 00:10:36.618 }, 00:10:36.618 { 00:10:36.618 "name": "BaseBdev4", 00:10:36.618 "uuid": "a500e13d-daff-50c9-b91b-02130d18a1e1", 00:10:36.618 "is_configured": true, 00:10:36.618 "data_offset": 2048, 00:10:36.618 "data_size": 63488 00:10:36.618 } 00:10:36.618 ] 00:10:36.618 }' 00:10:36.618 23:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.618 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:37.188 [2024-12-06 23:44:48.447011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.188 [2024-12-06 23:44:48.447149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.188 [2024-12-06 23:44:48.449896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.188 [2024-12-06 23:44:48.450007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.188 [2024-12-06 23:44:48.450076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.188 [2024-12-06 23:44:48.450130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:37.188 { 00:10:37.188 "results": [ 00:10:37.188 { 00:10:37.188 "job": "raid_bdev1", 00:10:37.188 "core_mask": "0x1", 00:10:37.188 "workload": "randrw", 00:10:37.188 "percentage": 50, 00:10:37.188 "status": "finished", 00:10:37.188 "queue_depth": 1, 00:10:37.188 "io_size": 131072, 00:10:37.188 "runtime": 1.38626, 00:10:37.188 "iops": 13185.83815445876, 00:10:37.188 "mibps": 1648.229769307345, 00:10:37.188 "io_failed": 1, 00:10:37.188 "io_timeout": 0, 00:10:37.188 "avg_latency_us": 106.74578731617822, 00:10:37.188 "min_latency_us": 26.829694323144103, 00:10:37.188 "max_latency_us": 1402.2986899563318 00:10:37.188 } 00:10:37.188 ], 00:10:37.188 "core_count": 1 00:10:37.188 } 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71048 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71048 ']' 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71048 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71048 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.188 killing process with pid 71048 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71048' 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71048 00:10:37.188 [2024-12-06 23:44:48.488337] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.188 23:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71048 00:10:37.447 [2024-12-06 23:44:48.840547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TWsIuPRcMU 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.828 ************************************ 00:10:38.828 END TEST raid_write_error_test 00:10:38.828 ************************************ 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.72 != \0\.\0\0 ]] 00:10:38.828 00:10:38.828 real 0m4.777s 00:10:38.828 user 0m5.430s 00:10:38.828 sys 0m0.687s 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.828 23:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.829 23:44:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:38.829 23:44:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:38.829 23:44:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.829 23:44:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.829 23:44:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.829 ************************************ 00:10:38.829 START TEST raid_state_function_test 00:10:38.829 ************************************ 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71192 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71192' 00:10:38.829 Process raid pid: 71192 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71192 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71192 ']' 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.829 23:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.829 [2024-12-06 23:44:50.335769] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:10:38.829 [2024-12-06 23:44:50.336003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.088 [2024-12-06 23:44:50.512037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.346 [2024-12-06 23:44:50.652829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.346 [2024-12-06 23:44:50.893476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.346 [2024-12-06 23:44:50.893630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.605 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.605 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:39.605 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.605 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.605 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.605 [2024-12-06 23:44:51.159843] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.605 [2024-12-06 23:44:51.160007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.605 [2024-12-06 23:44:51.160038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.605 [2024-12-06 23:44:51.160063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.605 [2024-12-06 23:44:51.160082] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:39.605 [2024-12-06 23:44:51.160105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.605 [2024-12-06 23:44:51.160124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:39.605 [2024-12-06 23:44:51.160145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.605 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.605 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.605 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.865 "name": "Existed_Raid", 00:10:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.865 "strip_size_kb": 64, 00:10:39.865 "state": "configuring", 00:10:39.865 "raid_level": "concat", 00:10:39.865 "superblock": false, 00:10:39.865 "num_base_bdevs": 4, 00:10:39.865 "num_base_bdevs_discovered": 0, 00:10:39.865 "num_base_bdevs_operational": 4, 00:10:39.865 "base_bdevs_list": [ 00:10:39.865 { 00:10:39.865 "name": "BaseBdev1", 00:10:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.865 "is_configured": false, 00:10:39.865 "data_offset": 0, 00:10:39.865 "data_size": 0 00:10:39.865 }, 00:10:39.865 { 00:10:39.865 "name": "BaseBdev2", 00:10:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.865 "is_configured": false, 00:10:39.865 "data_offset": 0, 00:10:39.865 "data_size": 0 00:10:39.865 }, 00:10:39.865 { 00:10:39.865 "name": "BaseBdev3", 00:10:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.865 "is_configured": false, 00:10:39.865 "data_offset": 0, 00:10:39.865 "data_size": 0 00:10:39.865 }, 00:10:39.865 { 00:10:39.865 "name": "BaseBdev4", 00:10:39.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.865 "is_configured": false, 00:10:39.865 "data_offset": 0, 00:10:39.865 "data_size": 0 00:10:39.865 } 00:10:39.865 ] 00:10:39.865 }' 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.865 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 [2024-12-06 23:44:51.639019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.125 [2024-12-06 23:44:51.639165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 [2024-12-06 23:44:51.650928] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.125 [2024-12-06 23:44:51.650976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.125 [2024-12-06 23:44:51.650987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.125 [2024-12-06 23:44:51.650996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.125 [2024-12-06 23:44:51.651003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.125 [2024-12-06 23:44:51.651013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.125 [2024-12-06 23:44:51.651019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.125 [2024-12-06 23:44:51.651028] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.125 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.384 [2024-12-06 23:44:51.706625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.384 BaseBdev1 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.384 [ 00:10:40.384 { 00:10:40.384 "name": "BaseBdev1", 00:10:40.384 "aliases": [ 00:10:40.384 "e86cfdfa-600b-443a-821c-7f0c02a50589" 00:10:40.384 ], 00:10:40.384 "product_name": "Malloc disk", 00:10:40.384 "block_size": 512, 00:10:40.384 "num_blocks": 65536, 00:10:40.384 "uuid": "e86cfdfa-600b-443a-821c-7f0c02a50589", 00:10:40.384 "assigned_rate_limits": { 00:10:40.384 "rw_ios_per_sec": 0, 00:10:40.384 "rw_mbytes_per_sec": 0, 00:10:40.384 "r_mbytes_per_sec": 0, 00:10:40.384 "w_mbytes_per_sec": 0 00:10:40.384 }, 00:10:40.384 "claimed": true, 00:10:40.384 "claim_type": "exclusive_write", 00:10:40.384 "zoned": false, 00:10:40.384 "supported_io_types": { 00:10:40.384 "read": true, 00:10:40.384 "write": true, 00:10:40.384 "unmap": true, 00:10:40.384 "flush": true, 00:10:40.384 "reset": true, 00:10:40.384 "nvme_admin": false, 00:10:40.384 "nvme_io": false, 00:10:40.384 "nvme_io_md": false, 00:10:40.384 "write_zeroes": true, 00:10:40.384 "zcopy": true, 00:10:40.384 "get_zone_info": false, 00:10:40.384 "zone_management": false, 00:10:40.384 "zone_append": false, 00:10:40.384 "compare": false, 00:10:40.384 "compare_and_write": false, 00:10:40.384 "abort": true, 00:10:40.384 "seek_hole": false, 00:10:40.384 "seek_data": false, 00:10:40.384 "copy": true, 00:10:40.384 "nvme_iov_md": false 00:10:40.384 }, 00:10:40.384 "memory_domains": [ 00:10:40.384 { 00:10:40.384 "dma_device_id": "system", 00:10:40.384 "dma_device_type": 1 00:10:40.384 }, 00:10:40.384 { 00:10:40.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.384 "dma_device_type": 2 00:10:40.384 } 00:10:40.384 ], 00:10:40.384 "driver_specific": {} 00:10:40.384 } 00:10:40.384 ] 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.384 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.384 "name": "Existed_Raid", 
00:10:40.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.384 "strip_size_kb": 64, 00:10:40.384 "state": "configuring", 00:10:40.384 "raid_level": "concat", 00:10:40.384 "superblock": false, 00:10:40.384 "num_base_bdevs": 4, 00:10:40.384 "num_base_bdevs_discovered": 1, 00:10:40.384 "num_base_bdevs_operational": 4, 00:10:40.384 "base_bdevs_list": [ 00:10:40.384 { 00:10:40.384 "name": "BaseBdev1", 00:10:40.384 "uuid": "e86cfdfa-600b-443a-821c-7f0c02a50589", 00:10:40.384 "is_configured": true, 00:10:40.384 "data_offset": 0, 00:10:40.384 "data_size": 65536 00:10:40.384 }, 00:10:40.384 { 00:10:40.384 "name": "BaseBdev2", 00:10:40.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.384 "is_configured": false, 00:10:40.384 "data_offset": 0, 00:10:40.384 "data_size": 0 00:10:40.384 }, 00:10:40.384 { 00:10:40.384 "name": "BaseBdev3", 00:10:40.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.384 "is_configured": false, 00:10:40.384 "data_offset": 0, 00:10:40.384 "data_size": 0 00:10:40.384 }, 00:10:40.384 { 00:10:40.385 "name": "BaseBdev4", 00:10:40.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.385 "is_configured": false, 00:10:40.385 "data_offset": 0, 00:10:40.385 "data_size": 0 00:10:40.385 } 00:10:40.385 ] 00:10:40.385 }' 00:10:40.385 23:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.385 23:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.642 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.642 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.642 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.642 [2024-12-06 23:44:52.197875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.642 [2024-12-06 23:44:52.197952] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:40.642 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.642 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.642 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.642 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.900 [2024-12-06 23:44:52.205914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.900 [2024-12-06 23:44:52.208351] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.900 [2024-12-06 23:44:52.208442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.900 [2024-12-06 23:44:52.208473] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.900 [2024-12-06 23:44:52.208500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.900 [2024-12-06 23:44:52.208519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.900 [2024-12-06 23:44:52.208542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.900 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.900 "name": "Existed_Raid", 00:10:40.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.900 "strip_size_kb": 64, 00:10:40.900 "state": "configuring", 00:10:40.900 "raid_level": "concat", 00:10:40.900 "superblock": false, 00:10:40.900 "num_base_bdevs": 4, 00:10:40.900 
"num_base_bdevs_discovered": 1, 00:10:40.900 "num_base_bdevs_operational": 4, 00:10:40.900 "base_bdevs_list": [ 00:10:40.900 { 00:10:40.900 "name": "BaseBdev1", 00:10:40.900 "uuid": "e86cfdfa-600b-443a-821c-7f0c02a50589", 00:10:40.900 "is_configured": true, 00:10:40.900 "data_offset": 0, 00:10:40.900 "data_size": 65536 00:10:40.900 }, 00:10:40.900 { 00:10:40.900 "name": "BaseBdev2", 00:10:40.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.900 "is_configured": false, 00:10:40.900 "data_offset": 0, 00:10:40.900 "data_size": 0 00:10:40.900 }, 00:10:40.900 { 00:10:40.900 "name": "BaseBdev3", 00:10:40.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.900 "is_configured": false, 00:10:40.900 "data_offset": 0, 00:10:40.900 "data_size": 0 00:10:40.901 }, 00:10:40.901 { 00:10:40.901 "name": "BaseBdev4", 00:10:40.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.901 "is_configured": false, 00:10:40.901 "data_offset": 0, 00:10:40.901 "data_size": 0 00:10:40.901 } 00:10:40.901 ] 00:10:40.901 }' 00:10:40.901 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.901 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.160 [2024-12-06 23:44:52.641749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.160 BaseBdev2 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:41.160 23:44:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.160 [ 00:10:41.160 { 00:10:41.160 "name": "BaseBdev2", 00:10:41.160 "aliases": [ 00:10:41.160 "184d321c-52bc-4cfb-8bef-534005aa04a7" 00:10:41.160 ], 00:10:41.160 "product_name": "Malloc disk", 00:10:41.160 "block_size": 512, 00:10:41.160 "num_blocks": 65536, 00:10:41.160 "uuid": "184d321c-52bc-4cfb-8bef-534005aa04a7", 00:10:41.160 "assigned_rate_limits": { 00:10:41.160 "rw_ios_per_sec": 0, 00:10:41.160 "rw_mbytes_per_sec": 0, 00:10:41.160 "r_mbytes_per_sec": 0, 00:10:41.160 "w_mbytes_per_sec": 0 00:10:41.160 }, 00:10:41.160 "claimed": true, 00:10:41.160 "claim_type": "exclusive_write", 00:10:41.160 "zoned": false, 00:10:41.160 "supported_io_types": { 
00:10:41.160 "read": true, 00:10:41.160 "write": true, 00:10:41.160 "unmap": true, 00:10:41.160 "flush": true, 00:10:41.160 "reset": true, 00:10:41.160 "nvme_admin": false, 00:10:41.160 "nvme_io": false, 00:10:41.160 "nvme_io_md": false, 00:10:41.160 "write_zeroes": true, 00:10:41.160 "zcopy": true, 00:10:41.160 "get_zone_info": false, 00:10:41.160 "zone_management": false, 00:10:41.160 "zone_append": false, 00:10:41.160 "compare": false, 00:10:41.160 "compare_and_write": false, 00:10:41.160 "abort": true, 00:10:41.160 "seek_hole": false, 00:10:41.160 "seek_data": false, 00:10:41.160 "copy": true, 00:10:41.160 "nvme_iov_md": false 00:10:41.160 }, 00:10:41.160 "memory_domains": [ 00:10:41.160 { 00:10:41.160 "dma_device_id": "system", 00:10:41.160 "dma_device_type": 1 00:10:41.160 }, 00:10:41.160 { 00:10:41.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.160 "dma_device_type": 2 00:10:41.160 } 00:10:41.160 ], 00:10:41.160 "driver_specific": {} 00:10:41.160 } 00:10:41.160 ] 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.160 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.419 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.419 "name": "Existed_Raid", 00:10:41.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.419 "strip_size_kb": 64, 00:10:41.419 "state": "configuring", 00:10:41.419 "raid_level": "concat", 00:10:41.419 "superblock": false, 00:10:41.419 "num_base_bdevs": 4, 00:10:41.419 "num_base_bdevs_discovered": 2, 00:10:41.419 "num_base_bdevs_operational": 4, 00:10:41.419 "base_bdevs_list": [ 00:10:41.419 { 00:10:41.419 "name": "BaseBdev1", 00:10:41.419 "uuid": "e86cfdfa-600b-443a-821c-7f0c02a50589", 00:10:41.419 "is_configured": true, 00:10:41.419 "data_offset": 0, 00:10:41.419 "data_size": 65536 00:10:41.419 }, 00:10:41.419 { 00:10:41.419 "name": "BaseBdev2", 00:10:41.419 "uuid": "184d321c-52bc-4cfb-8bef-534005aa04a7", 00:10:41.419 
"is_configured": true, 00:10:41.419 "data_offset": 0, 00:10:41.419 "data_size": 65536 00:10:41.419 }, 00:10:41.419 { 00:10:41.419 "name": "BaseBdev3", 00:10:41.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.419 "is_configured": false, 00:10:41.419 "data_offset": 0, 00:10:41.419 "data_size": 0 00:10:41.419 }, 00:10:41.419 { 00:10:41.419 "name": "BaseBdev4", 00:10:41.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.419 "is_configured": false, 00:10:41.419 "data_offset": 0, 00:10:41.419 "data_size": 0 00:10:41.419 } 00:10:41.419 ] 00:10:41.419 }' 00:10:41.419 23:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.419 23:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.679 [2024-12-06 23:44:53.163979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.679 BaseBdev3 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.679 [ 00:10:41.679 { 00:10:41.679 "name": "BaseBdev3", 00:10:41.679 "aliases": [ 00:10:41.679 "4cc6dd4b-335a-4ece-b243-3f94366e09ff" 00:10:41.679 ], 00:10:41.679 "product_name": "Malloc disk", 00:10:41.679 "block_size": 512, 00:10:41.679 "num_blocks": 65536, 00:10:41.679 "uuid": "4cc6dd4b-335a-4ece-b243-3f94366e09ff", 00:10:41.679 "assigned_rate_limits": { 00:10:41.679 "rw_ios_per_sec": 0, 00:10:41.679 "rw_mbytes_per_sec": 0, 00:10:41.679 "r_mbytes_per_sec": 0, 00:10:41.679 "w_mbytes_per_sec": 0 00:10:41.679 }, 00:10:41.679 "claimed": true, 00:10:41.679 "claim_type": "exclusive_write", 00:10:41.679 "zoned": false, 00:10:41.679 "supported_io_types": { 00:10:41.679 "read": true, 00:10:41.679 "write": true, 00:10:41.679 "unmap": true, 00:10:41.679 "flush": true, 00:10:41.679 "reset": true, 00:10:41.679 "nvme_admin": false, 00:10:41.679 "nvme_io": false, 00:10:41.679 "nvme_io_md": false, 00:10:41.679 "write_zeroes": true, 00:10:41.679 "zcopy": true, 00:10:41.679 "get_zone_info": false, 00:10:41.679 "zone_management": false, 00:10:41.679 "zone_append": false, 00:10:41.679 "compare": false, 00:10:41.679 "compare_and_write": false, 
00:10:41.679 "abort": true, 00:10:41.679 "seek_hole": false, 00:10:41.679 "seek_data": false, 00:10:41.679 "copy": true, 00:10:41.679 "nvme_iov_md": false 00:10:41.679 }, 00:10:41.679 "memory_domains": [ 00:10:41.679 { 00:10:41.679 "dma_device_id": "system", 00:10:41.679 "dma_device_type": 1 00:10:41.679 }, 00:10:41.679 { 00:10:41.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.679 "dma_device_type": 2 00:10:41.679 } 00:10:41.679 ], 00:10:41.679 "driver_specific": {} 00:10:41.679 } 00:10:41.679 ] 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.679 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.680 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:41.680 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.680 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.680 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.680 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.680 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.680 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.940 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.940 "name": "Existed_Raid", 00:10:41.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.940 "strip_size_kb": 64, 00:10:41.940 "state": "configuring", 00:10:41.940 "raid_level": "concat", 00:10:41.940 "superblock": false, 00:10:41.940 "num_base_bdevs": 4, 00:10:41.940 "num_base_bdevs_discovered": 3, 00:10:41.940 "num_base_bdevs_operational": 4, 00:10:41.940 "base_bdevs_list": [ 00:10:41.940 { 00:10:41.940 "name": "BaseBdev1", 00:10:41.940 "uuid": "e86cfdfa-600b-443a-821c-7f0c02a50589", 00:10:41.940 "is_configured": true, 00:10:41.940 "data_offset": 0, 00:10:41.940 "data_size": 65536 00:10:41.940 }, 00:10:41.940 { 00:10:41.940 "name": "BaseBdev2", 00:10:41.940 "uuid": "184d321c-52bc-4cfb-8bef-534005aa04a7", 00:10:41.940 "is_configured": true, 00:10:41.940 "data_offset": 0, 00:10:41.940 "data_size": 65536 00:10:41.940 }, 00:10:41.940 { 00:10:41.940 "name": "BaseBdev3", 00:10:41.940 "uuid": "4cc6dd4b-335a-4ece-b243-3f94366e09ff", 00:10:41.940 "is_configured": true, 00:10:41.940 "data_offset": 0, 00:10:41.940 "data_size": 65536 00:10:41.940 }, 00:10:41.940 { 00:10:41.940 "name": "BaseBdev4", 00:10:41.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.940 "is_configured": false, 
00:10:41.940 "data_offset": 0, 00:10:41.940 "data_size": 0 00:10:41.940 } 00:10:41.940 ] 00:10:41.940 }' 00:10:41.940 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.940 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.200 [2024-12-06 23:44:53.657953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.200 [2024-12-06 23:44:53.658091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.200 [2024-12-06 23:44:53.658116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:42.200 [2024-12-06 23:44:53.658440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:42.200 [2024-12-06 23:44:53.658654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.200 [2024-12-06 23:44:53.658707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:42.200 [2024-12-06 23:44:53.659040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.200 BaseBdev4 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.200 [ 00:10:42.200 { 00:10:42.200 "name": "BaseBdev4", 00:10:42.200 "aliases": [ 00:10:42.200 "a7df7954-4b0b-4ef2-b031-bc0fb1c5d658" 00:10:42.200 ], 00:10:42.200 "product_name": "Malloc disk", 00:10:42.200 "block_size": 512, 00:10:42.200 "num_blocks": 65536, 00:10:42.200 "uuid": "a7df7954-4b0b-4ef2-b031-bc0fb1c5d658", 00:10:42.200 "assigned_rate_limits": { 00:10:42.200 "rw_ios_per_sec": 0, 00:10:42.200 "rw_mbytes_per_sec": 0, 00:10:42.200 "r_mbytes_per_sec": 0, 00:10:42.200 "w_mbytes_per_sec": 0 00:10:42.200 }, 00:10:42.200 "claimed": true, 00:10:42.200 "claim_type": "exclusive_write", 00:10:42.200 "zoned": false, 00:10:42.200 "supported_io_types": { 00:10:42.200 "read": true, 00:10:42.200 "write": true, 00:10:42.200 "unmap": true, 00:10:42.200 "flush": true, 00:10:42.200 "reset": true, 00:10:42.200 
"nvme_admin": false, 00:10:42.200 "nvme_io": false, 00:10:42.200 "nvme_io_md": false, 00:10:42.200 "write_zeroes": true, 00:10:42.200 "zcopy": true, 00:10:42.200 "get_zone_info": false, 00:10:42.200 "zone_management": false, 00:10:42.200 "zone_append": false, 00:10:42.200 "compare": false, 00:10:42.200 "compare_and_write": false, 00:10:42.200 "abort": true, 00:10:42.200 "seek_hole": false, 00:10:42.200 "seek_data": false, 00:10:42.200 "copy": true, 00:10:42.200 "nvme_iov_md": false 00:10:42.200 }, 00:10:42.200 "memory_domains": [ 00:10:42.200 { 00:10:42.200 "dma_device_id": "system", 00:10:42.200 "dma_device_type": 1 00:10:42.200 }, 00:10:42.200 { 00:10:42.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.200 "dma_device_type": 2 00:10:42.200 } 00:10:42.200 ], 00:10:42.200 "driver_specific": {} 00:10:42.200 } 00:10:42.200 ] 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.200 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.201 
23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.201 "name": "Existed_Raid", 00:10:42.201 "uuid": "6c7205de-1d9b-48a3-8882-0d60f2672400", 00:10:42.201 "strip_size_kb": 64, 00:10:42.201 "state": "online", 00:10:42.201 "raid_level": "concat", 00:10:42.201 "superblock": false, 00:10:42.201 "num_base_bdevs": 4, 00:10:42.201 "num_base_bdevs_discovered": 4, 00:10:42.201 "num_base_bdevs_operational": 4, 00:10:42.201 "base_bdevs_list": [ 00:10:42.201 { 00:10:42.201 "name": "BaseBdev1", 00:10:42.201 "uuid": "e86cfdfa-600b-443a-821c-7f0c02a50589", 00:10:42.201 "is_configured": true, 00:10:42.201 "data_offset": 0, 00:10:42.201 "data_size": 65536 00:10:42.201 }, 00:10:42.201 { 00:10:42.201 "name": "BaseBdev2", 00:10:42.201 "uuid": "184d321c-52bc-4cfb-8bef-534005aa04a7", 00:10:42.201 "is_configured": true, 00:10:42.201 "data_offset": 0, 00:10:42.201 "data_size": 65536 00:10:42.201 }, 00:10:42.201 { 00:10:42.201 "name": "BaseBdev3", 
00:10:42.201 "uuid": "4cc6dd4b-335a-4ece-b243-3f94366e09ff", 00:10:42.201 "is_configured": true, 00:10:42.201 "data_offset": 0, 00:10:42.201 "data_size": 65536 00:10:42.201 }, 00:10:42.201 { 00:10:42.201 "name": "BaseBdev4", 00:10:42.201 "uuid": "a7df7954-4b0b-4ef2-b031-bc0fb1c5d658", 00:10:42.201 "is_configured": true, 00:10:42.201 "data_offset": 0, 00:10:42.201 "data_size": 65536 00:10:42.201 } 00:10:42.201 ] 00:10:42.201 }' 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.201 23:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.770 [2024-12-06 23:44:54.125652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.770 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.770 
23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.770 "name": "Existed_Raid", 00:10:42.770 "aliases": [ 00:10:42.770 "6c7205de-1d9b-48a3-8882-0d60f2672400" 00:10:42.770 ], 00:10:42.770 "product_name": "Raid Volume", 00:10:42.770 "block_size": 512, 00:10:42.770 "num_blocks": 262144, 00:10:42.770 "uuid": "6c7205de-1d9b-48a3-8882-0d60f2672400", 00:10:42.770 "assigned_rate_limits": { 00:10:42.770 "rw_ios_per_sec": 0, 00:10:42.770 "rw_mbytes_per_sec": 0, 00:10:42.770 "r_mbytes_per_sec": 0, 00:10:42.770 "w_mbytes_per_sec": 0 00:10:42.770 }, 00:10:42.770 "claimed": false, 00:10:42.770 "zoned": false, 00:10:42.770 "supported_io_types": { 00:10:42.770 "read": true, 00:10:42.770 "write": true, 00:10:42.770 "unmap": true, 00:10:42.770 "flush": true, 00:10:42.770 "reset": true, 00:10:42.770 "nvme_admin": false, 00:10:42.770 "nvme_io": false, 00:10:42.770 "nvme_io_md": false, 00:10:42.770 "write_zeroes": true, 00:10:42.770 "zcopy": false, 00:10:42.770 "get_zone_info": false, 00:10:42.770 "zone_management": false, 00:10:42.770 "zone_append": false, 00:10:42.770 "compare": false, 00:10:42.770 "compare_and_write": false, 00:10:42.770 "abort": false, 00:10:42.770 "seek_hole": false, 00:10:42.770 "seek_data": false, 00:10:42.770 "copy": false, 00:10:42.770 "nvme_iov_md": false 00:10:42.770 }, 00:10:42.770 "memory_domains": [ 00:10:42.770 { 00:10:42.770 "dma_device_id": "system", 00:10:42.770 "dma_device_type": 1 00:10:42.770 }, 00:10:42.770 { 00:10:42.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.770 "dma_device_type": 2 00:10:42.770 }, 00:10:42.770 { 00:10:42.770 "dma_device_id": "system", 00:10:42.770 "dma_device_type": 1 00:10:42.770 }, 00:10:42.770 { 00:10:42.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.770 "dma_device_type": 2 00:10:42.770 }, 00:10:42.770 { 00:10:42.770 "dma_device_id": "system", 00:10:42.770 "dma_device_type": 1 00:10:42.770 }, 00:10:42.770 { 00:10:42.770 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:42.771 "dma_device_type": 2 00:10:42.771 }, 00:10:42.771 { 00:10:42.771 "dma_device_id": "system", 00:10:42.771 "dma_device_type": 1 00:10:42.771 }, 00:10:42.771 { 00:10:42.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.771 "dma_device_type": 2 00:10:42.771 } 00:10:42.771 ], 00:10:42.771 "driver_specific": { 00:10:42.771 "raid": { 00:10:42.771 "uuid": "6c7205de-1d9b-48a3-8882-0d60f2672400", 00:10:42.771 "strip_size_kb": 64, 00:10:42.771 "state": "online", 00:10:42.771 "raid_level": "concat", 00:10:42.771 "superblock": false, 00:10:42.771 "num_base_bdevs": 4, 00:10:42.771 "num_base_bdevs_discovered": 4, 00:10:42.771 "num_base_bdevs_operational": 4, 00:10:42.771 "base_bdevs_list": [ 00:10:42.771 { 00:10:42.771 "name": "BaseBdev1", 00:10:42.771 "uuid": "e86cfdfa-600b-443a-821c-7f0c02a50589", 00:10:42.771 "is_configured": true, 00:10:42.771 "data_offset": 0, 00:10:42.771 "data_size": 65536 00:10:42.771 }, 00:10:42.771 { 00:10:42.771 "name": "BaseBdev2", 00:10:42.771 "uuid": "184d321c-52bc-4cfb-8bef-534005aa04a7", 00:10:42.771 "is_configured": true, 00:10:42.771 "data_offset": 0, 00:10:42.771 "data_size": 65536 00:10:42.771 }, 00:10:42.771 { 00:10:42.771 "name": "BaseBdev3", 00:10:42.771 "uuid": "4cc6dd4b-335a-4ece-b243-3f94366e09ff", 00:10:42.771 "is_configured": true, 00:10:42.771 "data_offset": 0, 00:10:42.771 "data_size": 65536 00:10:42.771 }, 00:10:42.771 { 00:10:42.771 "name": "BaseBdev4", 00:10:42.771 "uuid": "a7df7954-4b0b-4ef2-b031-bc0fb1c5d658", 00:10:42.771 "is_configured": true, 00:10:42.771 "data_offset": 0, 00:10:42.771 "data_size": 65536 00:10:42.771 } 00:10:42.771 ] 00:10:42.771 } 00:10:42.771 } 00:10:42.771 }' 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:42.771 BaseBdev2 
00:10:42.771 BaseBdev3 00:10:42.771 BaseBdev4' 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.771 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.031 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.032 23:44:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.032 23:44:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.032 [2024-12-06 23:44:54.460689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.032 [2024-12-06 23:44:54.460734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.032 [2024-12-06 23:44:54.460795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.032 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.292 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.292 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.292 "name": "Existed_Raid", 00:10:43.292 "uuid": "6c7205de-1d9b-48a3-8882-0d60f2672400", 00:10:43.292 "strip_size_kb": 64, 00:10:43.292 "state": "offline", 00:10:43.292 "raid_level": "concat", 00:10:43.292 "superblock": false, 00:10:43.292 "num_base_bdevs": 4, 00:10:43.292 "num_base_bdevs_discovered": 3, 00:10:43.292 "num_base_bdevs_operational": 3, 00:10:43.292 "base_bdevs_list": [ 00:10:43.292 { 00:10:43.292 "name": null, 00:10:43.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.292 "is_configured": false, 00:10:43.292 "data_offset": 0, 00:10:43.292 "data_size": 65536 00:10:43.292 }, 00:10:43.292 { 00:10:43.292 "name": "BaseBdev2", 00:10:43.292 "uuid": "184d321c-52bc-4cfb-8bef-534005aa04a7", 00:10:43.292 "is_configured": 
true, 00:10:43.292 "data_offset": 0, 00:10:43.292 "data_size": 65536 00:10:43.292 }, 00:10:43.292 { 00:10:43.292 "name": "BaseBdev3", 00:10:43.292 "uuid": "4cc6dd4b-335a-4ece-b243-3f94366e09ff", 00:10:43.292 "is_configured": true, 00:10:43.292 "data_offset": 0, 00:10:43.292 "data_size": 65536 00:10:43.292 }, 00:10:43.292 { 00:10:43.292 "name": "BaseBdev4", 00:10:43.292 "uuid": "a7df7954-4b0b-4ef2-b031-bc0fb1c5d658", 00:10:43.292 "is_configured": true, 00:10:43.292 "data_offset": 0, 00:10:43.292 "data_size": 65536 00:10:43.292 } 00:10:43.292 ] 00:10:43.292 }' 00:10:43.292 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.292 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.552 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:43.552 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.552 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.552 23:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.552 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.552 23:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.552 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.552 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.552 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.552 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:43.552 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:43.552 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.552 [2024-12-06 23:44:55.042098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.812 [2024-12-06 23:44:55.204653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.812 23:44:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.812 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.812 [2024-12-06 23:44:55.366883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:43.812 [2024-12-06 23:44:55.366969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:44.071 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.071 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.072 BaseBdev2 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.072 [ 00:10:44.072 { 00:10:44.072 "name": "BaseBdev2", 00:10:44.072 "aliases": [ 00:10:44.072 "3d62a913-ffb1-493e-b0df-1cf43ff3c42f" 00:10:44.072 ], 00:10:44.072 "product_name": "Malloc disk", 00:10:44.072 "block_size": 512, 00:10:44.072 "num_blocks": 65536, 00:10:44.072 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:44.072 "assigned_rate_limits": { 00:10:44.072 "rw_ios_per_sec": 0, 00:10:44.072 "rw_mbytes_per_sec": 0, 00:10:44.072 "r_mbytes_per_sec": 0, 00:10:44.072 "w_mbytes_per_sec": 0 00:10:44.072 }, 00:10:44.072 "claimed": false, 00:10:44.072 "zoned": false, 00:10:44.072 "supported_io_types": { 00:10:44.072 "read": true, 00:10:44.072 "write": true, 00:10:44.072 "unmap": true, 00:10:44.072 "flush": true, 00:10:44.072 "reset": true, 00:10:44.072 "nvme_admin": false, 00:10:44.072 "nvme_io": false, 00:10:44.072 "nvme_io_md": false, 00:10:44.072 "write_zeroes": true, 00:10:44.072 "zcopy": true, 00:10:44.072 "get_zone_info": false, 00:10:44.072 "zone_management": false, 00:10:44.072 "zone_append": false, 00:10:44.072 "compare": false, 00:10:44.072 "compare_and_write": false, 00:10:44.072 "abort": true, 00:10:44.072 "seek_hole": false, 00:10:44.072 
"seek_data": false, 00:10:44.072 "copy": true, 00:10:44.072 "nvme_iov_md": false 00:10:44.072 }, 00:10:44.072 "memory_domains": [ 00:10:44.072 { 00:10:44.072 "dma_device_id": "system", 00:10:44.072 "dma_device_type": 1 00:10:44.072 }, 00:10:44.072 { 00:10:44.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.072 "dma_device_type": 2 00:10:44.072 } 00:10:44.072 ], 00:10:44.072 "driver_specific": {} 00:10:44.072 } 00:10:44.072 ] 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.072 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 BaseBdev3 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 [ 00:10:44.333 { 00:10:44.333 "name": "BaseBdev3", 00:10:44.333 "aliases": [ 00:10:44.333 "7faafc44-34ad-4bfe-90ba-82a427d81111" 00:10:44.333 ], 00:10:44.333 "product_name": "Malloc disk", 00:10:44.333 "block_size": 512, 00:10:44.333 "num_blocks": 65536, 00:10:44.333 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:44.333 "assigned_rate_limits": { 00:10:44.333 "rw_ios_per_sec": 0, 00:10:44.333 "rw_mbytes_per_sec": 0, 00:10:44.333 "r_mbytes_per_sec": 0, 00:10:44.333 "w_mbytes_per_sec": 0 00:10:44.333 }, 00:10:44.333 "claimed": false, 00:10:44.333 "zoned": false, 00:10:44.333 "supported_io_types": { 00:10:44.333 "read": true, 00:10:44.333 "write": true, 00:10:44.333 "unmap": true, 00:10:44.333 "flush": true, 00:10:44.333 "reset": true, 00:10:44.333 "nvme_admin": false, 00:10:44.333 "nvme_io": false, 00:10:44.333 "nvme_io_md": false, 00:10:44.333 "write_zeroes": true, 00:10:44.333 "zcopy": true, 00:10:44.333 "get_zone_info": false, 00:10:44.333 "zone_management": false, 00:10:44.333 "zone_append": false, 00:10:44.333 "compare": false, 00:10:44.333 "compare_and_write": false, 00:10:44.333 "abort": true, 00:10:44.333 "seek_hole": false, 00:10:44.333 "seek_data": false, 
00:10:44.333 "copy": true, 00:10:44.333 "nvme_iov_md": false 00:10:44.333 }, 00:10:44.333 "memory_domains": [ 00:10:44.333 { 00:10:44.333 "dma_device_id": "system", 00:10:44.333 "dma_device_type": 1 00:10:44.333 }, 00:10:44.333 { 00:10:44.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.333 "dma_device_type": 2 00:10:44.333 } 00:10:44.333 ], 00:10:44.333 "driver_specific": {} 00:10:44.333 } 00:10:44.333 ] 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 BaseBdev4 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.333 
23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.333 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.334 [ 00:10:44.334 { 00:10:44.334 "name": "BaseBdev4", 00:10:44.334 "aliases": [ 00:10:44.334 "49e07a04-c128-4e9f-b309-7f6e8596187b" 00:10:44.334 ], 00:10:44.334 "product_name": "Malloc disk", 00:10:44.334 "block_size": 512, 00:10:44.334 "num_blocks": 65536, 00:10:44.334 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:44.334 "assigned_rate_limits": { 00:10:44.334 "rw_ios_per_sec": 0, 00:10:44.334 "rw_mbytes_per_sec": 0, 00:10:44.334 "r_mbytes_per_sec": 0, 00:10:44.334 "w_mbytes_per_sec": 0 00:10:44.334 }, 00:10:44.334 "claimed": false, 00:10:44.334 "zoned": false, 00:10:44.334 "supported_io_types": { 00:10:44.334 "read": true, 00:10:44.334 "write": true, 00:10:44.334 "unmap": true, 00:10:44.334 "flush": true, 00:10:44.334 "reset": true, 00:10:44.334 "nvme_admin": false, 00:10:44.334 "nvme_io": false, 00:10:44.334 "nvme_io_md": false, 00:10:44.334 "write_zeroes": true, 00:10:44.334 "zcopy": true, 00:10:44.334 "get_zone_info": false, 00:10:44.334 "zone_management": false, 00:10:44.334 "zone_append": false, 00:10:44.334 "compare": false, 00:10:44.334 "compare_and_write": false, 00:10:44.334 "abort": true, 00:10:44.334 "seek_hole": false, 00:10:44.334 "seek_data": false, 00:10:44.334 
"copy": true, 00:10:44.334 "nvme_iov_md": false 00:10:44.334 }, 00:10:44.334 "memory_domains": [ 00:10:44.334 { 00:10:44.334 "dma_device_id": "system", 00:10:44.334 "dma_device_type": 1 00:10:44.334 }, 00:10:44.334 { 00:10:44.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.334 "dma_device_type": 2 00:10:44.334 } 00:10:44.334 ], 00:10:44.334 "driver_specific": {} 00:10:44.334 } 00:10:44.334 ] 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.334 [2024-12-06 23:44:55.760925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.334 [2024-12-06 23:44:55.761052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.334 [2024-12-06 23:44:55.761095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.334 [2024-12-06 23:44:55.763230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.334 [2024-12-06 23:44:55.763322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.334 23:44:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.334 "name": "Existed_Raid", 00:10:44.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.334 "strip_size_kb": 64, 00:10:44.334 "state": "configuring", 00:10:44.334 
"raid_level": "concat", 00:10:44.334 "superblock": false, 00:10:44.334 "num_base_bdevs": 4, 00:10:44.334 "num_base_bdevs_discovered": 3, 00:10:44.334 "num_base_bdevs_operational": 4, 00:10:44.334 "base_bdevs_list": [ 00:10:44.334 { 00:10:44.334 "name": "BaseBdev1", 00:10:44.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.334 "is_configured": false, 00:10:44.334 "data_offset": 0, 00:10:44.334 "data_size": 0 00:10:44.334 }, 00:10:44.334 { 00:10:44.334 "name": "BaseBdev2", 00:10:44.334 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:44.334 "is_configured": true, 00:10:44.334 "data_offset": 0, 00:10:44.334 "data_size": 65536 00:10:44.334 }, 00:10:44.334 { 00:10:44.334 "name": "BaseBdev3", 00:10:44.334 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:44.334 "is_configured": true, 00:10:44.334 "data_offset": 0, 00:10:44.334 "data_size": 65536 00:10:44.334 }, 00:10:44.334 { 00:10:44.334 "name": "BaseBdev4", 00:10:44.334 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:44.334 "is_configured": true, 00:10:44.334 "data_offset": 0, 00:10:44.334 "data_size": 65536 00:10:44.334 } 00:10:44.334 ] 00:10:44.334 }' 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.334 23:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.906 [2024-12-06 23:44:56.184247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.906 "name": "Existed_Raid", 00:10:44.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.906 "strip_size_kb": 64, 00:10:44.906 "state": "configuring", 00:10:44.906 "raid_level": "concat", 00:10:44.906 "superblock": false, 
00:10:44.906 "num_base_bdevs": 4, 00:10:44.906 "num_base_bdevs_discovered": 2, 00:10:44.906 "num_base_bdevs_operational": 4, 00:10:44.906 "base_bdevs_list": [ 00:10:44.906 { 00:10:44.906 "name": "BaseBdev1", 00:10:44.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.906 "is_configured": false, 00:10:44.906 "data_offset": 0, 00:10:44.906 "data_size": 0 00:10:44.906 }, 00:10:44.906 { 00:10:44.906 "name": null, 00:10:44.906 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:44.906 "is_configured": false, 00:10:44.906 "data_offset": 0, 00:10:44.906 "data_size": 65536 00:10:44.906 }, 00:10:44.906 { 00:10:44.906 "name": "BaseBdev3", 00:10:44.906 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:44.906 "is_configured": true, 00:10:44.906 "data_offset": 0, 00:10:44.906 "data_size": 65536 00:10:44.906 }, 00:10:44.906 { 00:10:44.906 "name": "BaseBdev4", 00:10:44.906 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:44.906 "is_configured": true, 00:10:44.906 "data_offset": 0, 00:10:44.906 "data_size": 65536 00:10:44.906 } 00:10:44.906 ] 00:10:44.906 }' 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.906 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.166 23:44:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.166 [2024-12-06 23:44:56.710929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.166 BaseBdev1 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.166 23:44:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.426 [ 00:10:45.426 { 00:10:45.426 "name": "BaseBdev1", 00:10:45.426 "aliases": [ 00:10:45.426 "5be56a09-13fa-4b58-acab-af10a81cd36d" 00:10:45.426 ], 00:10:45.426 "product_name": "Malloc disk", 00:10:45.426 "block_size": 512, 00:10:45.426 "num_blocks": 65536, 00:10:45.426 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:45.426 "assigned_rate_limits": { 00:10:45.426 "rw_ios_per_sec": 0, 00:10:45.426 "rw_mbytes_per_sec": 0, 00:10:45.426 "r_mbytes_per_sec": 0, 00:10:45.426 "w_mbytes_per_sec": 0 00:10:45.426 }, 00:10:45.426 "claimed": true, 00:10:45.426 "claim_type": "exclusive_write", 00:10:45.426 "zoned": false, 00:10:45.426 "supported_io_types": { 00:10:45.426 "read": true, 00:10:45.426 "write": true, 00:10:45.426 "unmap": true, 00:10:45.426 "flush": true, 00:10:45.426 "reset": true, 00:10:45.426 "nvme_admin": false, 00:10:45.426 "nvme_io": false, 00:10:45.426 "nvme_io_md": false, 00:10:45.426 "write_zeroes": true, 00:10:45.426 "zcopy": true, 00:10:45.426 "get_zone_info": false, 00:10:45.426 "zone_management": false, 00:10:45.426 "zone_append": false, 00:10:45.426 "compare": false, 00:10:45.426 "compare_and_write": false, 00:10:45.426 "abort": true, 00:10:45.426 "seek_hole": false, 00:10:45.426 "seek_data": false, 00:10:45.426 "copy": true, 00:10:45.426 "nvme_iov_md": false 00:10:45.426 }, 00:10:45.426 "memory_domains": [ 00:10:45.426 { 00:10:45.426 "dma_device_id": "system", 00:10:45.426 "dma_device_type": 1 00:10:45.426 }, 00:10:45.426 { 00:10:45.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.426 "dma_device_type": 2 00:10:45.426 } 00:10:45.426 ], 00:10:45.426 "driver_specific": {} 00:10:45.426 } 00:10:45.426 ] 00:10:45.426 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.426 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.426 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.426 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.427 "name": "Existed_Raid", 00:10:45.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.427 "strip_size_kb": 64, 00:10:45.427 "state": "configuring", 00:10:45.427 "raid_level": "concat", 00:10:45.427 "superblock": false, 
00:10:45.427 "num_base_bdevs": 4, 00:10:45.427 "num_base_bdevs_discovered": 3, 00:10:45.427 "num_base_bdevs_operational": 4, 00:10:45.427 "base_bdevs_list": [ 00:10:45.427 { 00:10:45.427 "name": "BaseBdev1", 00:10:45.427 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:45.427 "is_configured": true, 00:10:45.427 "data_offset": 0, 00:10:45.427 "data_size": 65536 00:10:45.427 }, 00:10:45.427 { 00:10:45.427 "name": null, 00:10:45.427 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:45.427 "is_configured": false, 00:10:45.427 "data_offset": 0, 00:10:45.427 "data_size": 65536 00:10:45.427 }, 00:10:45.427 { 00:10:45.427 "name": "BaseBdev3", 00:10:45.427 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:45.427 "is_configured": true, 00:10:45.427 "data_offset": 0, 00:10:45.427 "data_size": 65536 00:10:45.427 }, 00:10:45.427 { 00:10:45.427 "name": "BaseBdev4", 00:10:45.427 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:45.427 "is_configured": true, 00:10:45.427 "data_offset": 0, 00:10:45.427 "data_size": 65536 00:10:45.427 } 00:10:45.427 ] 00:10:45.427 }' 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.427 23:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.687 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.687 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.687 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.687 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.687 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:45.947 23:44:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.947 [2024-12-06 23:44:57.270065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.947 23:44:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.947 "name": "Existed_Raid", 00:10:45.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.947 "strip_size_kb": 64, 00:10:45.947 "state": "configuring", 00:10:45.947 "raid_level": "concat", 00:10:45.947 "superblock": false, 00:10:45.947 "num_base_bdevs": 4, 00:10:45.947 "num_base_bdevs_discovered": 2, 00:10:45.947 "num_base_bdevs_operational": 4, 00:10:45.947 "base_bdevs_list": [ 00:10:45.947 { 00:10:45.947 "name": "BaseBdev1", 00:10:45.947 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:45.947 "is_configured": true, 00:10:45.947 "data_offset": 0, 00:10:45.947 "data_size": 65536 00:10:45.947 }, 00:10:45.947 { 00:10:45.947 "name": null, 00:10:45.947 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:45.947 "is_configured": false, 00:10:45.947 "data_offset": 0, 00:10:45.947 "data_size": 65536 00:10:45.947 }, 00:10:45.947 { 00:10:45.947 "name": null, 00:10:45.947 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:45.947 "is_configured": false, 00:10:45.947 "data_offset": 0, 00:10:45.947 "data_size": 65536 00:10:45.947 }, 00:10:45.947 { 00:10:45.947 "name": "BaseBdev4", 00:10:45.947 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:45.947 "is_configured": true, 00:10:45.947 "data_offset": 0, 00:10:45.947 "data_size": 65536 00:10:45.947 } 00:10:45.947 ] 00:10:45.947 }' 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.947 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.207 23:44:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.207 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.207 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.207 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.207 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.466 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:46.466 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:46.466 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.466 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.466 [2024-12-06 23:44:57.785186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.466 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.466 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.466 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.467 "name": "Existed_Raid", 00:10:46.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.467 "strip_size_kb": 64, 00:10:46.467 "state": "configuring", 00:10:46.467 "raid_level": "concat", 00:10:46.467 "superblock": false, 00:10:46.467 "num_base_bdevs": 4, 00:10:46.467 "num_base_bdevs_discovered": 3, 00:10:46.467 "num_base_bdevs_operational": 4, 00:10:46.467 "base_bdevs_list": [ 00:10:46.467 { 00:10:46.467 "name": "BaseBdev1", 00:10:46.467 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:46.467 "is_configured": true, 00:10:46.467 "data_offset": 0, 00:10:46.467 "data_size": 65536 00:10:46.467 }, 00:10:46.467 { 00:10:46.467 "name": null, 00:10:46.467 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:46.467 "is_configured": false, 00:10:46.467 "data_offset": 0, 00:10:46.467 "data_size": 65536 00:10:46.467 }, 00:10:46.467 { 00:10:46.467 "name": "BaseBdev3", 00:10:46.467 "uuid": 
"7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:46.467 "is_configured": true, 00:10:46.467 "data_offset": 0, 00:10:46.467 "data_size": 65536 00:10:46.467 }, 00:10:46.467 { 00:10:46.467 "name": "BaseBdev4", 00:10:46.467 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:46.467 "is_configured": true, 00:10:46.467 "data_offset": 0, 00:10:46.467 "data_size": 65536 00:10:46.467 } 00:10:46.467 ] 00:10:46.467 }' 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.467 23:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.726 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.986 [2024-12-06 23:44:58.292362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.986 "name": "Existed_Raid", 00:10:46.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.986 "strip_size_kb": 64, 00:10:46.986 "state": "configuring", 00:10:46.986 "raid_level": "concat", 00:10:46.986 "superblock": false, 00:10:46.986 "num_base_bdevs": 4, 00:10:46.986 
"num_base_bdevs_discovered": 2, 00:10:46.986 "num_base_bdevs_operational": 4, 00:10:46.986 "base_bdevs_list": [ 00:10:46.986 { 00:10:46.986 "name": null, 00:10:46.986 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:46.986 "is_configured": false, 00:10:46.986 "data_offset": 0, 00:10:46.986 "data_size": 65536 00:10:46.986 }, 00:10:46.986 { 00:10:46.986 "name": null, 00:10:46.986 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:46.986 "is_configured": false, 00:10:46.986 "data_offset": 0, 00:10:46.986 "data_size": 65536 00:10:46.986 }, 00:10:46.986 { 00:10:46.986 "name": "BaseBdev3", 00:10:46.986 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:46.986 "is_configured": true, 00:10:46.986 "data_offset": 0, 00:10:46.986 "data_size": 65536 00:10:46.986 }, 00:10:46.986 { 00:10:46.986 "name": "BaseBdev4", 00:10:46.986 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:46.986 "is_configured": true, 00:10:46.986 "data_offset": 0, 00:10:46.986 "data_size": 65536 00:10:46.986 } 00:10:46.986 ] 00:10:46.986 }' 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.986 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.297 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.297 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.297 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.297 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.297 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.557 [2024-12-06 23:44:58.883103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.557 "name": "Existed_Raid", 00:10:47.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.557 "strip_size_kb": 64, 00:10:47.557 "state": "configuring", 00:10:47.557 "raid_level": "concat", 00:10:47.557 "superblock": false, 00:10:47.557 "num_base_bdevs": 4, 00:10:47.557 "num_base_bdevs_discovered": 3, 00:10:47.557 "num_base_bdevs_operational": 4, 00:10:47.557 "base_bdevs_list": [ 00:10:47.557 { 00:10:47.557 "name": null, 00:10:47.557 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:47.557 "is_configured": false, 00:10:47.557 "data_offset": 0, 00:10:47.557 "data_size": 65536 00:10:47.557 }, 00:10:47.557 { 00:10:47.557 "name": "BaseBdev2", 00:10:47.557 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:47.557 "is_configured": true, 00:10:47.557 "data_offset": 0, 00:10:47.557 "data_size": 65536 00:10:47.557 }, 00:10:47.557 { 00:10:47.557 "name": "BaseBdev3", 00:10:47.557 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:47.557 "is_configured": true, 00:10:47.557 "data_offset": 0, 00:10:47.557 "data_size": 65536 00:10:47.557 }, 00:10:47.557 { 00:10:47.557 "name": "BaseBdev4", 00:10:47.557 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:47.557 "is_configured": true, 00:10:47.557 "data_offset": 0, 00:10:47.557 "data_size": 65536 00:10:47.557 } 00:10:47.557 ] 00:10:47.557 }' 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.557 23:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:47.817 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5be56a09-13fa-4b58-acab-af10a81cd36d 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.083 [2024-12-06 23:44:59.436008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.083 [2024-12-06 23:44:59.436149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:48.083 [2024-12-06 23:44:59.436173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:48.083 [2024-12-06 23:44:59.436511] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:48.083 [2024-12-06 23:44:59.436722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:48.083 [2024-12-06 23:44:59.436763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:48.083 [2024-12-06 23:44:59.437067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.083 NewBaseBdev 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.083 23:44:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.083 [ 00:10:48.083 { 00:10:48.083 "name": "NewBaseBdev", 00:10:48.083 "aliases": [ 00:10:48.083 "5be56a09-13fa-4b58-acab-af10a81cd36d" 00:10:48.083 ], 00:10:48.083 "product_name": "Malloc disk", 00:10:48.083 "block_size": 512, 00:10:48.083 "num_blocks": 65536, 00:10:48.083 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:48.083 "assigned_rate_limits": { 00:10:48.083 "rw_ios_per_sec": 0, 00:10:48.083 "rw_mbytes_per_sec": 0, 00:10:48.083 "r_mbytes_per_sec": 0, 00:10:48.083 "w_mbytes_per_sec": 0 00:10:48.083 }, 00:10:48.083 "claimed": true, 00:10:48.083 "claim_type": "exclusive_write", 00:10:48.083 "zoned": false, 00:10:48.083 "supported_io_types": { 00:10:48.083 "read": true, 00:10:48.083 "write": true, 00:10:48.083 "unmap": true, 00:10:48.083 "flush": true, 00:10:48.083 "reset": true, 00:10:48.083 "nvme_admin": false, 00:10:48.083 "nvme_io": false, 00:10:48.083 "nvme_io_md": false, 00:10:48.083 "write_zeroes": true, 00:10:48.083 "zcopy": true, 00:10:48.083 "get_zone_info": false, 00:10:48.083 "zone_management": false, 00:10:48.083 "zone_append": false, 00:10:48.083 "compare": false, 00:10:48.083 "compare_and_write": false, 00:10:48.083 "abort": true, 00:10:48.083 "seek_hole": false, 00:10:48.083 "seek_data": false, 00:10:48.083 "copy": true, 00:10:48.083 "nvme_iov_md": false 00:10:48.083 }, 00:10:48.083 "memory_domains": [ 00:10:48.083 { 00:10:48.083 "dma_device_id": "system", 00:10:48.083 "dma_device_type": 1 00:10:48.083 }, 00:10:48.083 { 00:10:48.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.083 "dma_device_type": 2 00:10:48.083 } 00:10:48.083 ], 00:10:48.083 "driver_specific": {} 00:10:48.083 } 00:10:48.083 ] 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:48.083 23:44:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.083 "name": "Existed_Raid", 00:10:48.083 "uuid": "4dc68d03-57aa-4d9e-ae13-8251e1bc8c7a", 00:10:48.083 "strip_size_kb": 64, 00:10:48.083 "state": "online", 00:10:48.083 "raid_level": 
"concat", 00:10:48.083 "superblock": false, 00:10:48.083 "num_base_bdevs": 4, 00:10:48.083 "num_base_bdevs_discovered": 4, 00:10:48.083 "num_base_bdevs_operational": 4, 00:10:48.083 "base_bdevs_list": [ 00:10:48.083 { 00:10:48.083 "name": "NewBaseBdev", 00:10:48.083 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:48.083 "is_configured": true, 00:10:48.083 "data_offset": 0, 00:10:48.083 "data_size": 65536 00:10:48.083 }, 00:10:48.083 { 00:10:48.083 "name": "BaseBdev2", 00:10:48.083 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:48.083 "is_configured": true, 00:10:48.083 "data_offset": 0, 00:10:48.083 "data_size": 65536 00:10:48.083 }, 00:10:48.083 { 00:10:48.083 "name": "BaseBdev3", 00:10:48.083 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:48.083 "is_configured": true, 00:10:48.083 "data_offset": 0, 00:10:48.083 "data_size": 65536 00:10:48.083 }, 00:10:48.083 { 00:10:48.083 "name": "BaseBdev4", 00:10:48.083 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:48.083 "is_configured": true, 00:10:48.083 "data_offset": 0, 00:10:48.083 "data_size": 65536 00:10:48.083 } 00:10:48.083 ] 00:10:48.083 }' 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.083 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.689 [2024-12-06 23:44:59.943611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.689 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.689 "name": "Existed_Raid", 00:10:48.689 "aliases": [ 00:10:48.689 "4dc68d03-57aa-4d9e-ae13-8251e1bc8c7a" 00:10:48.689 ], 00:10:48.689 "product_name": "Raid Volume", 00:10:48.689 "block_size": 512, 00:10:48.689 "num_blocks": 262144, 00:10:48.689 "uuid": "4dc68d03-57aa-4d9e-ae13-8251e1bc8c7a", 00:10:48.689 "assigned_rate_limits": { 00:10:48.689 "rw_ios_per_sec": 0, 00:10:48.689 "rw_mbytes_per_sec": 0, 00:10:48.689 "r_mbytes_per_sec": 0, 00:10:48.689 "w_mbytes_per_sec": 0 00:10:48.689 }, 00:10:48.689 "claimed": false, 00:10:48.689 "zoned": false, 00:10:48.689 "supported_io_types": { 00:10:48.689 "read": true, 00:10:48.689 "write": true, 00:10:48.689 "unmap": true, 00:10:48.689 "flush": true, 00:10:48.689 "reset": true, 00:10:48.689 "nvme_admin": false, 00:10:48.689 "nvme_io": false, 00:10:48.689 "nvme_io_md": false, 00:10:48.689 "write_zeroes": true, 00:10:48.689 "zcopy": false, 00:10:48.689 "get_zone_info": false, 00:10:48.689 "zone_management": false, 00:10:48.689 "zone_append": false, 00:10:48.689 "compare": false, 00:10:48.689 "compare_and_write": false, 00:10:48.689 "abort": false, 00:10:48.689 "seek_hole": false, 00:10:48.689 "seek_data": false, 00:10:48.689 "copy": false, 
00:10:48.689 "nvme_iov_md": false 00:10:48.689 }, 00:10:48.689 "memory_domains": [ 00:10:48.689 { 00:10:48.689 "dma_device_id": "system", 00:10:48.689 "dma_device_type": 1 00:10:48.689 }, 00:10:48.689 { 00:10:48.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.689 "dma_device_type": 2 00:10:48.689 }, 00:10:48.689 { 00:10:48.689 "dma_device_id": "system", 00:10:48.689 "dma_device_type": 1 00:10:48.689 }, 00:10:48.689 { 00:10:48.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.689 "dma_device_type": 2 00:10:48.689 }, 00:10:48.689 { 00:10:48.689 "dma_device_id": "system", 00:10:48.689 "dma_device_type": 1 00:10:48.689 }, 00:10:48.689 { 00:10:48.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.689 "dma_device_type": 2 00:10:48.689 }, 00:10:48.689 { 00:10:48.689 "dma_device_id": "system", 00:10:48.689 "dma_device_type": 1 00:10:48.689 }, 00:10:48.689 { 00:10:48.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.689 "dma_device_type": 2 00:10:48.689 } 00:10:48.689 ], 00:10:48.689 "driver_specific": { 00:10:48.689 "raid": { 00:10:48.689 "uuid": "4dc68d03-57aa-4d9e-ae13-8251e1bc8c7a", 00:10:48.689 "strip_size_kb": 64, 00:10:48.689 "state": "online", 00:10:48.689 "raid_level": "concat", 00:10:48.689 "superblock": false, 00:10:48.689 "num_base_bdevs": 4, 00:10:48.689 "num_base_bdevs_discovered": 4, 00:10:48.689 "num_base_bdevs_operational": 4, 00:10:48.689 "base_bdevs_list": [ 00:10:48.689 { 00:10:48.689 "name": "NewBaseBdev", 00:10:48.689 "uuid": "5be56a09-13fa-4b58-acab-af10a81cd36d", 00:10:48.689 "is_configured": true, 00:10:48.689 "data_offset": 0, 00:10:48.689 "data_size": 65536 00:10:48.689 }, 00:10:48.689 { 00:10:48.689 "name": "BaseBdev2", 00:10:48.689 "uuid": "3d62a913-ffb1-493e-b0df-1cf43ff3c42f", 00:10:48.690 "is_configured": true, 00:10:48.690 "data_offset": 0, 00:10:48.690 "data_size": 65536 00:10:48.690 }, 00:10:48.690 { 00:10:48.690 "name": "BaseBdev3", 00:10:48.690 "uuid": "7faafc44-34ad-4bfe-90ba-82a427d81111", 00:10:48.690 
"is_configured": true, 00:10:48.690 "data_offset": 0, 00:10:48.690 "data_size": 65536 00:10:48.690 }, 00:10:48.690 { 00:10:48.690 "name": "BaseBdev4", 00:10:48.690 "uuid": "49e07a04-c128-4e9f-b309-7f6e8596187b", 00:10:48.690 "is_configured": true, 00:10:48.690 "data_offset": 0, 00:10:48.690 "data_size": 65536 00:10:48.690 } 00:10:48.690 ] 00:10:48.690 } 00:10:48.690 } 00:10:48.690 }' 00:10:48.690 23:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:48.690 BaseBdev2 00:10:48.690 BaseBdev3 00:10:48.690 BaseBdev4' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.690 23:45:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.690 23:45:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.690 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.964 [2024-12-06 23:45:00.274789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.964 [2024-12-06 23:45:00.274834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.964 [2024-12-06 23:45:00.274929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.964 [2024-12-06 23:45:00.275012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.964 [2024-12-06 23:45:00.275023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71192 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71192 ']' 00:10:48.964 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71192 00:10:48.965 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:48.965 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.965 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71192 00:10:48.965 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.965 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.965 killing process with pid 71192 00:10:48.965 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71192' 00:10:48.965 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71192 00:10:48.965 [2024-12-06 23:45:00.324817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.965 23:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71192 00:10:49.224 [2024-12-06 23:45:00.758693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.603 ************************************ 00:10:50.603 END TEST raid_state_function_test 00:10:50.603 ************************************ 00:10:50.603 23:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:50.603 00:10:50.603 real 0m11.767s 00:10:50.603 user 0m18.417s 00:10:50.603 sys 0m2.165s 00:10:50.603 23:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.603 23:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:50.603 23:45:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:50.603 23:45:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.603 23:45:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.603 23:45:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.603 ************************************ 00:10:50.603 START TEST raid_state_function_test_sb 00:10:50.603 ************************************ 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.603 
23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:50.603 Process raid pid: 71863 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:50.603 23:45:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71863 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71863' 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71863 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71863 ']' 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.603 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.863 [2024-12-06 23:45:02.171063] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:10:50.863 [2024-12-06 23:45:02.171300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.863 [2024-12-06 23:45:02.347576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.122 [2024-12-06 23:45:02.486317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.382 [2024-12-06 23:45:02.721489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.382 [2024-12-06 23:45:02.721616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.643 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.643 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:51.643 23:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:51.643 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.643 23:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.643 [2024-12-06 23:45:03.006011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.643 [2024-12-06 23:45:03.006163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.643 [2024-12-06 23:45:03.006197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:51.643 [2024-12-06 23:45:03.006221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.643 [2024-12-06 23:45:03.006237] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:51.643 [2024-12-06 23:45:03.006258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:51.643 [2024-12-06 23:45:03.006274] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:51.643 [2024-12-06 23:45:03.006294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.643 
23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.643 "name": "Existed_Raid", 00:10:51.643 "uuid": "9a0cc261-de09-4477-a66f-bb218f5dc6d2", 00:10:51.643 "strip_size_kb": 64, 00:10:51.643 "state": "configuring", 00:10:51.643 "raid_level": "concat", 00:10:51.643 "superblock": true, 00:10:51.643 "num_base_bdevs": 4, 00:10:51.643 "num_base_bdevs_discovered": 0, 00:10:51.643 "num_base_bdevs_operational": 4, 00:10:51.643 "base_bdevs_list": [ 00:10:51.643 { 00:10:51.643 "name": "BaseBdev1", 00:10:51.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.643 "is_configured": false, 00:10:51.643 "data_offset": 0, 00:10:51.643 "data_size": 0 00:10:51.643 }, 00:10:51.643 { 00:10:51.643 "name": "BaseBdev2", 00:10:51.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.643 "is_configured": false, 00:10:51.643 "data_offset": 0, 00:10:51.643 "data_size": 0 00:10:51.643 }, 00:10:51.643 { 00:10:51.643 "name": "BaseBdev3", 00:10:51.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.643 "is_configured": false, 00:10:51.643 "data_offset": 0, 00:10:51.643 "data_size": 0 00:10:51.643 }, 00:10:51.643 { 00:10:51.643 "name": "BaseBdev4", 00:10:51.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.643 "is_configured": false, 00:10:51.643 "data_offset": 0, 00:10:51.643 "data_size": 0 00:10:51.643 } 00:10:51.643 ] 00:10:51.643 }' 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.643 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.213 23:45:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.213 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.213 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.213 [2024-12-06 23:45:03.477187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.213 [2024-12-06 23:45:03.477243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:52.213 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.213 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.213 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.213 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.213 [2024-12-06 23:45:03.485153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.213 [2024-12-06 23:45:03.485268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.213 [2024-12-06 23:45:03.485281] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.214 [2024-12-06 23:45:03.485292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.214 [2024-12-06 23:45:03.485298] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.214 [2024-12-06 23:45:03.485307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.214 [2024-12-06 23:45:03.485314] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:52.214 [2024-12-06 23:45:03.485323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.214 [2024-12-06 23:45:03.536765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.214 BaseBdev1 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.214 [ 00:10:52.214 { 00:10:52.214 "name": "BaseBdev1", 00:10:52.214 "aliases": [ 00:10:52.214 "e6818969-ad85-4cc5-b518-ffb6be510985" 00:10:52.214 ], 00:10:52.214 "product_name": "Malloc disk", 00:10:52.214 "block_size": 512, 00:10:52.214 "num_blocks": 65536, 00:10:52.214 "uuid": "e6818969-ad85-4cc5-b518-ffb6be510985", 00:10:52.214 "assigned_rate_limits": { 00:10:52.214 "rw_ios_per_sec": 0, 00:10:52.214 "rw_mbytes_per_sec": 0, 00:10:52.214 "r_mbytes_per_sec": 0, 00:10:52.214 "w_mbytes_per_sec": 0 00:10:52.214 }, 00:10:52.214 "claimed": true, 00:10:52.214 "claim_type": "exclusive_write", 00:10:52.214 "zoned": false, 00:10:52.214 "supported_io_types": { 00:10:52.214 "read": true, 00:10:52.214 "write": true, 00:10:52.214 "unmap": true, 00:10:52.214 "flush": true, 00:10:52.214 "reset": true, 00:10:52.214 "nvme_admin": false, 00:10:52.214 "nvme_io": false, 00:10:52.214 "nvme_io_md": false, 00:10:52.214 "write_zeroes": true, 00:10:52.214 "zcopy": true, 00:10:52.214 "get_zone_info": false, 00:10:52.214 "zone_management": false, 00:10:52.214 "zone_append": false, 00:10:52.214 "compare": false, 00:10:52.214 "compare_and_write": false, 00:10:52.214 "abort": true, 00:10:52.214 "seek_hole": false, 00:10:52.214 "seek_data": false, 00:10:52.214 "copy": true, 00:10:52.214 "nvme_iov_md": false 00:10:52.214 }, 00:10:52.214 "memory_domains": [ 00:10:52.214 { 00:10:52.214 "dma_device_id": "system", 00:10:52.214 "dma_device_type": 1 00:10:52.214 }, 00:10:52.214 { 00:10:52.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.214 "dma_device_type": 2 00:10:52.214 } 
00:10:52.214 ], 00:10:52.214 "driver_specific": {} 00:10:52.214 } 00:10:52.214 ] 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.214 23:45:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.214 "name": "Existed_Raid", 00:10:52.214 "uuid": "e99e9b79-c4fd-47d0-8197-e66ab63bb582", 00:10:52.214 "strip_size_kb": 64, 00:10:52.214 "state": "configuring", 00:10:52.214 "raid_level": "concat", 00:10:52.214 "superblock": true, 00:10:52.214 "num_base_bdevs": 4, 00:10:52.214 "num_base_bdevs_discovered": 1, 00:10:52.214 "num_base_bdevs_operational": 4, 00:10:52.214 "base_bdevs_list": [ 00:10:52.214 { 00:10:52.214 "name": "BaseBdev1", 00:10:52.214 "uuid": "e6818969-ad85-4cc5-b518-ffb6be510985", 00:10:52.214 "is_configured": true, 00:10:52.214 "data_offset": 2048, 00:10:52.214 "data_size": 63488 00:10:52.214 }, 00:10:52.214 { 00:10:52.214 "name": "BaseBdev2", 00:10:52.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.214 "is_configured": false, 00:10:52.214 "data_offset": 0, 00:10:52.214 "data_size": 0 00:10:52.214 }, 00:10:52.214 { 00:10:52.214 "name": "BaseBdev3", 00:10:52.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.214 "is_configured": false, 00:10:52.214 "data_offset": 0, 00:10:52.214 "data_size": 0 00:10:52.214 }, 00:10:52.214 { 00:10:52.214 "name": "BaseBdev4", 00:10:52.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.214 "is_configured": false, 00:10:52.214 "data_offset": 0, 00:10:52.214 "data_size": 0 00:10:52.214 } 00:10:52.214 ] 00:10:52.214 }' 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.214 23:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.474 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.474 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.474 23:45:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.474 [2024-12-06 23:45:04.031954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.474 [2024-12-06 23:45:04.032116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.734 [2024-12-06 23:45:04.039990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.734 [2024-12-06 23:45:04.042067] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.734 [2024-12-06 23:45:04.042144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.734 [2024-12-06 23:45:04.042171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.734 [2024-12-06 23:45:04.042195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.734 [2024-12-06 23:45:04.042212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.734 [2024-12-06 23:45:04.042231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.734 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:52.734 "name": "Existed_Raid", 00:10:52.734 "uuid": "0ee71139-68dd-4045-8c38-b6371c05162f", 00:10:52.734 "strip_size_kb": 64, 00:10:52.734 "state": "configuring", 00:10:52.734 "raid_level": "concat", 00:10:52.734 "superblock": true, 00:10:52.734 "num_base_bdevs": 4, 00:10:52.734 "num_base_bdevs_discovered": 1, 00:10:52.734 "num_base_bdevs_operational": 4, 00:10:52.734 "base_bdevs_list": [ 00:10:52.734 { 00:10:52.734 "name": "BaseBdev1", 00:10:52.734 "uuid": "e6818969-ad85-4cc5-b518-ffb6be510985", 00:10:52.734 "is_configured": true, 00:10:52.734 "data_offset": 2048, 00:10:52.734 "data_size": 63488 00:10:52.734 }, 00:10:52.734 { 00:10:52.734 "name": "BaseBdev2", 00:10:52.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.734 "is_configured": false, 00:10:52.734 "data_offset": 0, 00:10:52.734 "data_size": 0 00:10:52.734 }, 00:10:52.734 { 00:10:52.734 "name": "BaseBdev3", 00:10:52.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.734 "is_configured": false, 00:10:52.734 "data_offset": 0, 00:10:52.734 "data_size": 0 00:10:52.734 }, 00:10:52.734 { 00:10:52.734 "name": "BaseBdev4", 00:10:52.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.735 "is_configured": false, 00:10:52.735 "data_offset": 0, 00:10:52.735 "data_size": 0 00:10:52.735 } 00:10:52.735 ] 00:10:52.735 }' 00:10:52.735 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.735 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.995 [2024-12-06 23:45:04.510382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:52.995 BaseBdev2 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.995 [ 00:10:52.995 { 00:10:52.995 "name": "BaseBdev2", 00:10:52.995 "aliases": [ 00:10:52.995 "0cb0a634-9570-4c3d-82f3-a50f8035f3ff" 00:10:52.995 ], 00:10:52.995 "product_name": "Malloc disk", 00:10:52.995 "block_size": 512, 00:10:52.995 "num_blocks": 65536, 00:10:52.995 "uuid": "0cb0a634-9570-4c3d-82f3-a50f8035f3ff", 
00:10:52.995 "assigned_rate_limits": { 00:10:52.995 "rw_ios_per_sec": 0, 00:10:52.995 "rw_mbytes_per_sec": 0, 00:10:52.995 "r_mbytes_per_sec": 0, 00:10:52.995 "w_mbytes_per_sec": 0 00:10:52.995 }, 00:10:52.995 "claimed": true, 00:10:52.995 "claim_type": "exclusive_write", 00:10:52.995 "zoned": false, 00:10:52.995 "supported_io_types": { 00:10:52.995 "read": true, 00:10:52.995 "write": true, 00:10:52.995 "unmap": true, 00:10:52.995 "flush": true, 00:10:52.995 "reset": true, 00:10:52.995 "nvme_admin": false, 00:10:52.995 "nvme_io": false, 00:10:52.995 "nvme_io_md": false, 00:10:52.995 "write_zeroes": true, 00:10:52.995 "zcopy": true, 00:10:52.995 "get_zone_info": false, 00:10:52.995 "zone_management": false, 00:10:52.995 "zone_append": false, 00:10:52.995 "compare": false, 00:10:52.995 "compare_and_write": false, 00:10:52.995 "abort": true, 00:10:52.995 "seek_hole": false, 00:10:52.995 "seek_data": false, 00:10:52.995 "copy": true, 00:10:52.995 "nvme_iov_md": false 00:10:52.995 }, 00:10:52.995 "memory_domains": [ 00:10:52.995 { 00:10:52.995 "dma_device_id": "system", 00:10:52.995 "dma_device_type": 1 00:10:52.995 }, 00:10:52.995 { 00:10:52.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.995 "dma_device_type": 2 00:10:52.995 } 00:10:52.995 ], 00:10:52.995 "driver_specific": {} 00:10:52.995 } 00:10:52.995 ] 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:52.995 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.996 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.996 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.996 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.996 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.996 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.996 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.996 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.256 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.256 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.256 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.256 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.256 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.256 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.256 "name": "Existed_Raid", 00:10:53.256 "uuid": "0ee71139-68dd-4045-8c38-b6371c05162f", 00:10:53.256 "strip_size_kb": 64, 00:10:53.256 "state": "configuring", 00:10:53.256 "raid_level": "concat", 00:10:53.256 "superblock": true, 00:10:53.256 "num_base_bdevs": 4, 00:10:53.256 "num_base_bdevs_discovered": 2, 00:10:53.256 
"num_base_bdevs_operational": 4, 00:10:53.256 "base_bdevs_list": [ 00:10:53.256 { 00:10:53.256 "name": "BaseBdev1", 00:10:53.256 "uuid": "e6818969-ad85-4cc5-b518-ffb6be510985", 00:10:53.256 "is_configured": true, 00:10:53.256 "data_offset": 2048, 00:10:53.256 "data_size": 63488 00:10:53.256 }, 00:10:53.256 { 00:10:53.256 "name": "BaseBdev2", 00:10:53.256 "uuid": "0cb0a634-9570-4c3d-82f3-a50f8035f3ff", 00:10:53.256 "is_configured": true, 00:10:53.256 "data_offset": 2048, 00:10:53.256 "data_size": 63488 00:10:53.256 }, 00:10:53.256 { 00:10:53.256 "name": "BaseBdev3", 00:10:53.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.256 "is_configured": false, 00:10:53.256 "data_offset": 0, 00:10:53.256 "data_size": 0 00:10:53.256 }, 00:10:53.256 { 00:10:53.256 "name": "BaseBdev4", 00:10:53.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.256 "is_configured": false, 00:10:53.256 "data_offset": 0, 00:10:53.256 "data_size": 0 00:10:53.256 } 00:10:53.256 ] 00:10:53.256 }' 00:10:53.256 23:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.256 23:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.516 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.516 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.516 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.516 [2024-12-06 23:45:05.074389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.516 BaseBdev3 00:10:53.517 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.517 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:53.517 23:45:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:53.517 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.517 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.776 [ 00:10:53.776 { 00:10:53.776 "name": "BaseBdev3", 00:10:53.776 "aliases": [ 00:10:53.776 "7c8bcb65-63a4-4a8b-9bc1-60f73c343209" 00:10:53.776 ], 00:10:53.776 "product_name": "Malloc disk", 00:10:53.776 "block_size": 512, 00:10:53.776 "num_blocks": 65536, 00:10:53.776 "uuid": "7c8bcb65-63a4-4a8b-9bc1-60f73c343209", 00:10:53.776 "assigned_rate_limits": { 00:10:53.776 "rw_ios_per_sec": 0, 00:10:53.776 "rw_mbytes_per_sec": 0, 00:10:53.776 "r_mbytes_per_sec": 0, 00:10:53.776 "w_mbytes_per_sec": 0 00:10:53.776 }, 00:10:53.776 "claimed": true, 00:10:53.776 "claim_type": "exclusive_write", 00:10:53.776 "zoned": false, 00:10:53.776 "supported_io_types": { 
00:10:53.776 "read": true, 00:10:53.776 "write": true, 00:10:53.776 "unmap": true, 00:10:53.776 "flush": true, 00:10:53.776 "reset": true, 00:10:53.776 "nvme_admin": false, 00:10:53.776 "nvme_io": false, 00:10:53.776 "nvme_io_md": false, 00:10:53.776 "write_zeroes": true, 00:10:53.776 "zcopy": true, 00:10:53.776 "get_zone_info": false, 00:10:53.776 "zone_management": false, 00:10:53.776 "zone_append": false, 00:10:53.776 "compare": false, 00:10:53.776 "compare_and_write": false, 00:10:53.776 "abort": true, 00:10:53.776 "seek_hole": false, 00:10:53.776 "seek_data": false, 00:10:53.776 "copy": true, 00:10:53.776 "nvme_iov_md": false 00:10:53.776 }, 00:10:53.776 "memory_domains": [ 00:10:53.776 { 00:10:53.776 "dma_device_id": "system", 00:10:53.776 "dma_device_type": 1 00:10:53.776 }, 00:10:53.776 { 00:10:53.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.776 "dma_device_type": 2 00:10:53.776 } 00:10:53.776 ], 00:10:53.776 "driver_specific": {} 00:10:53.776 } 00:10:53.776 ] 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.776 "name": "Existed_Raid", 00:10:53.776 "uuid": "0ee71139-68dd-4045-8c38-b6371c05162f", 00:10:53.776 "strip_size_kb": 64, 00:10:53.776 "state": "configuring", 00:10:53.776 "raid_level": "concat", 00:10:53.776 "superblock": true, 00:10:53.776 "num_base_bdevs": 4, 00:10:53.776 "num_base_bdevs_discovered": 3, 00:10:53.776 "num_base_bdevs_operational": 4, 00:10:53.776 "base_bdevs_list": [ 00:10:53.776 { 00:10:53.776 "name": "BaseBdev1", 00:10:53.776 "uuid": "e6818969-ad85-4cc5-b518-ffb6be510985", 00:10:53.776 "is_configured": true, 00:10:53.776 "data_offset": 2048, 00:10:53.776 "data_size": 63488 00:10:53.776 }, 00:10:53.776 { 00:10:53.776 "name": "BaseBdev2", 00:10:53.776 
"uuid": "0cb0a634-9570-4c3d-82f3-a50f8035f3ff", 00:10:53.776 "is_configured": true, 00:10:53.776 "data_offset": 2048, 00:10:53.776 "data_size": 63488 00:10:53.776 }, 00:10:53.776 { 00:10:53.776 "name": "BaseBdev3", 00:10:53.776 "uuid": "7c8bcb65-63a4-4a8b-9bc1-60f73c343209", 00:10:53.776 "is_configured": true, 00:10:53.776 "data_offset": 2048, 00:10:53.776 "data_size": 63488 00:10:53.776 }, 00:10:53.776 { 00:10:53.776 "name": "BaseBdev4", 00:10:53.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.776 "is_configured": false, 00:10:53.776 "data_offset": 0, 00:10:53.776 "data_size": 0 00:10:53.776 } 00:10:53.776 ] 00:10:53.776 }' 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.776 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.034 [2024-12-06 23:45:05.586102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.034 [2024-12-06 23:45:05.586402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:54.034 [2024-12-06 23:45:05.586419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:54.034 [2024-12-06 23:45:05.586745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:54.034 [2024-12-06 23:45:05.586948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:54.034 [2024-12-06 23:45:05.586962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:10:54.034 BaseBdev4 00:10:54.034 [2024-12-06 23:45:05.587122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.034 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.035 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.035 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.035 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.035 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.293 [ 00:10:54.293 { 00:10:54.293 "name": "BaseBdev4", 00:10:54.293 "aliases": [ 00:10:54.293 "d1e5f6ff-0a16-4657-9698-cf9ef9e91f89" 00:10:54.293 ], 00:10:54.293 "product_name": "Malloc disk", 00:10:54.293 "block_size": 512, 
00:10:54.293 "num_blocks": 65536, 00:10:54.293 "uuid": "d1e5f6ff-0a16-4657-9698-cf9ef9e91f89", 00:10:54.293 "assigned_rate_limits": { 00:10:54.293 "rw_ios_per_sec": 0, 00:10:54.293 "rw_mbytes_per_sec": 0, 00:10:54.293 "r_mbytes_per_sec": 0, 00:10:54.293 "w_mbytes_per_sec": 0 00:10:54.293 }, 00:10:54.293 "claimed": true, 00:10:54.293 "claim_type": "exclusive_write", 00:10:54.293 "zoned": false, 00:10:54.293 "supported_io_types": { 00:10:54.293 "read": true, 00:10:54.293 "write": true, 00:10:54.293 "unmap": true, 00:10:54.293 "flush": true, 00:10:54.293 "reset": true, 00:10:54.293 "nvme_admin": false, 00:10:54.293 "nvme_io": false, 00:10:54.293 "nvme_io_md": false, 00:10:54.293 "write_zeroes": true, 00:10:54.293 "zcopy": true, 00:10:54.293 "get_zone_info": false, 00:10:54.293 "zone_management": false, 00:10:54.293 "zone_append": false, 00:10:54.293 "compare": false, 00:10:54.293 "compare_and_write": false, 00:10:54.293 "abort": true, 00:10:54.293 "seek_hole": false, 00:10:54.293 "seek_data": false, 00:10:54.293 "copy": true, 00:10:54.293 "nvme_iov_md": false 00:10:54.293 }, 00:10:54.293 "memory_domains": [ 00:10:54.293 { 00:10:54.293 "dma_device_id": "system", 00:10:54.293 "dma_device_type": 1 00:10:54.293 }, 00:10:54.293 { 00:10:54.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.293 "dma_device_type": 2 00:10:54.293 } 00:10:54.293 ], 00:10:54.293 "driver_specific": {} 00:10:54.293 } 00:10:54.293 ] 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.293 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.293 "name": "Existed_Raid", 00:10:54.293 "uuid": "0ee71139-68dd-4045-8c38-b6371c05162f", 00:10:54.293 "strip_size_kb": 64, 00:10:54.293 "state": "online", 00:10:54.293 "raid_level": "concat", 00:10:54.293 "superblock": true, 00:10:54.293 "num_base_bdevs": 
4, 00:10:54.293 "num_base_bdevs_discovered": 4, 00:10:54.293 "num_base_bdevs_operational": 4, 00:10:54.293 "base_bdevs_list": [ 00:10:54.293 { 00:10:54.293 "name": "BaseBdev1", 00:10:54.293 "uuid": "e6818969-ad85-4cc5-b518-ffb6be510985", 00:10:54.293 "is_configured": true, 00:10:54.293 "data_offset": 2048, 00:10:54.293 "data_size": 63488 00:10:54.293 }, 00:10:54.293 { 00:10:54.293 "name": "BaseBdev2", 00:10:54.293 "uuid": "0cb0a634-9570-4c3d-82f3-a50f8035f3ff", 00:10:54.293 "is_configured": true, 00:10:54.293 "data_offset": 2048, 00:10:54.293 "data_size": 63488 00:10:54.293 }, 00:10:54.293 { 00:10:54.293 "name": "BaseBdev3", 00:10:54.293 "uuid": "7c8bcb65-63a4-4a8b-9bc1-60f73c343209", 00:10:54.293 "is_configured": true, 00:10:54.293 "data_offset": 2048, 00:10:54.293 "data_size": 63488 00:10:54.293 }, 00:10:54.293 { 00:10:54.293 "name": "BaseBdev4", 00:10:54.293 "uuid": "d1e5f6ff-0a16-4657-9698-cf9ef9e91f89", 00:10:54.293 "is_configured": true, 00:10:54.293 "data_offset": 2048, 00:10:54.294 "data_size": 63488 00:10:54.294 } 00:10:54.294 ] 00:10:54.294 }' 00:10:54.294 23:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.294 23:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.553 
23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.553 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.813 [2024-12-06 23:45:06.117614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.813 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.813 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.813 "name": "Existed_Raid", 00:10:54.813 "aliases": [ 00:10:54.813 "0ee71139-68dd-4045-8c38-b6371c05162f" 00:10:54.813 ], 00:10:54.813 "product_name": "Raid Volume", 00:10:54.813 "block_size": 512, 00:10:54.813 "num_blocks": 253952, 00:10:54.813 "uuid": "0ee71139-68dd-4045-8c38-b6371c05162f", 00:10:54.813 "assigned_rate_limits": { 00:10:54.813 "rw_ios_per_sec": 0, 00:10:54.813 "rw_mbytes_per_sec": 0, 00:10:54.813 "r_mbytes_per_sec": 0, 00:10:54.813 "w_mbytes_per_sec": 0 00:10:54.813 }, 00:10:54.813 "claimed": false, 00:10:54.813 "zoned": false, 00:10:54.813 "supported_io_types": { 00:10:54.813 "read": true, 00:10:54.813 "write": true, 00:10:54.813 "unmap": true, 00:10:54.813 "flush": true, 00:10:54.813 "reset": true, 00:10:54.813 "nvme_admin": false, 00:10:54.813 "nvme_io": false, 00:10:54.813 "nvme_io_md": false, 00:10:54.813 "write_zeroes": true, 00:10:54.813 "zcopy": false, 00:10:54.813 "get_zone_info": false, 00:10:54.813 "zone_management": false, 00:10:54.813 "zone_append": false, 00:10:54.813 "compare": false, 00:10:54.813 "compare_and_write": false, 00:10:54.813 "abort": false, 00:10:54.813 "seek_hole": false, 00:10:54.813 "seek_data": false, 00:10:54.813 "copy": false, 00:10:54.813 
"nvme_iov_md": false 00:10:54.813 }, 00:10:54.813 "memory_domains": [ 00:10:54.813 { 00:10:54.813 "dma_device_id": "system", 00:10:54.813 "dma_device_type": 1 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.813 "dma_device_type": 2 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "dma_device_id": "system", 00:10:54.813 "dma_device_type": 1 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.813 "dma_device_type": 2 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "dma_device_id": "system", 00:10:54.813 "dma_device_type": 1 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.813 "dma_device_type": 2 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "dma_device_id": "system", 00:10:54.813 "dma_device_type": 1 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.813 "dma_device_type": 2 00:10:54.813 } 00:10:54.813 ], 00:10:54.813 "driver_specific": { 00:10:54.813 "raid": { 00:10:54.813 "uuid": "0ee71139-68dd-4045-8c38-b6371c05162f", 00:10:54.813 "strip_size_kb": 64, 00:10:54.813 "state": "online", 00:10:54.813 "raid_level": "concat", 00:10:54.813 "superblock": true, 00:10:54.813 "num_base_bdevs": 4, 00:10:54.813 "num_base_bdevs_discovered": 4, 00:10:54.813 "num_base_bdevs_operational": 4, 00:10:54.813 "base_bdevs_list": [ 00:10:54.813 { 00:10:54.813 "name": "BaseBdev1", 00:10:54.813 "uuid": "e6818969-ad85-4cc5-b518-ffb6be510985", 00:10:54.813 "is_configured": true, 00:10:54.813 "data_offset": 2048, 00:10:54.813 "data_size": 63488 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "name": "BaseBdev2", 00:10:54.813 "uuid": "0cb0a634-9570-4c3d-82f3-a50f8035f3ff", 00:10:54.813 "is_configured": true, 00:10:54.813 "data_offset": 2048, 00:10:54.813 "data_size": 63488 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "name": "BaseBdev3", 00:10:54.813 "uuid": "7c8bcb65-63a4-4a8b-9bc1-60f73c343209", 00:10:54.813 "is_configured": true, 
00:10:54.813 "data_offset": 2048, 00:10:54.813 "data_size": 63488 00:10:54.813 }, 00:10:54.813 { 00:10:54.813 "name": "BaseBdev4", 00:10:54.813 "uuid": "d1e5f6ff-0a16-4657-9698-cf9ef9e91f89", 00:10:54.813 "is_configured": true, 00:10:54.813 "data_offset": 2048, 00:10:54.813 "data_size": 63488 00:10:54.813 } 00:10:54.813 ] 00:10:54.813 } 00:10:54.813 } 00:10:54.813 }' 00:10:54.813 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.813 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:54.813 BaseBdev2 00:10:54.813 BaseBdev3 00:10:54.813 BaseBdev4' 00:10:54.813 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.813 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.813 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.813 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.814 23:45:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.814 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.073 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.073 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.073 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:55.073 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.073 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.073 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.073 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.073 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.074 [2024-12-06 23:45:06.432811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.074 [2024-12-06 23:45:06.432851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.074 [2024-12-06 23:45:06.432909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.074 "name": "Existed_Raid", 00:10:55.074 "uuid": "0ee71139-68dd-4045-8c38-b6371c05162f", 00:10:55.074 "strip_size_kb": 64, 00:10:55.074 "state": "offline", 00:10:55.074 "raid_level": "concat", 00:10:55.074 "superblock": true, 00:10:55.074 "num_base_bdevs": 4, 00:10:55.074 "num_base_bdevs_discovered": 3, 00:10:55.074 "num_base_bdevs_operational": 3, 00:10:55.074 "base_bdevs_list": [ 00:10:55.074 { 00:10:55.074 "name": null, 00:10:55.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.074 "is_configured": false, 00:10:55.074 "data_offset": 0, 00:10:55.074 "data_size": 63488 00:10:55.074 }, 00:10:55.074 { 00:10:55.074 "name": "BaseBdev2", 00:10:55.074 "uuid": "0cb0a634-9570-4c3d-82f3-a50f8035f3ff", 00:10:55.074 "is_configured": true, 00:10:55.074 "data_offset": 2048, 00:10:55.074 "data_size": 63488 00:10:55.074 }, 00:10:55.074 { 00:10:55.074 "name": "BaseBdev3", 00:10:55.074 "uuid": "7c8bcb65-63a4-4a8b-9bc1-60f73c343209", 00:10:55.074 "is_configured": true, 00:10:55.074 "data_offset": 2048, 00:10:55.074 "data_size": 63488 00:10:55.074 }, 00:10:55.074 { 00:10:55.074 "name": "BaseBdev4", 00:10:55.074 "uuid": "d1e5f6ff-0a16-4657-9698-cf9ef9e91f89", 00:10:55.074 "is_configured": true, 00:10:55.074 "data_offset": 2048, 00:10:55.074 "data_size": 63488 00:10:55.074 } 00:10:55.074 ] 00:10:55.074 }' 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.074 23:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.642 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:55.642 23:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.642 
23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.642 [2024-12-06 23:45:07.044824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:55.642 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.902 [2024-12-06 23:45:07.208902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:55.902 23:45:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.902 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.902 [2024-12-06 23:45:07.369588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:55.902 [2024-12-06 23:45:07.369754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 BaseBdev2 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.163 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.163 [ 00:10:56.163 { 00:10:56.163 "name": "BaseBdev2", 00:10:56.163 "aliases": [ 00:10:56.163 
"5f67cc4d-9a18-452c-8976-1cf0850505bd" 00:10:56.164 ], 00:10:56.164 "product_name": "Malloc disk", 00:10:56.164 "block_size": 512, 00:10:56.164 "num_blocks": 65536, 00:10:56.164 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:10:56.164 "assigned_rate_limits": { 00:10:56.164 "rw_ios_per_sec": 0, 00:10:56.164 "rw_mbytes_per_sec": 0, 00:10:56.164 "r_mbytes_per_sec": 0, 00:10:56.164 "w_mbytes_per_sec": 0 00:10:56.164 }, 00:10:56.164 "claimed": false, 00:10:56.164 "zoned": false, 00:10:56.164 "supported_io_types": { 00:10:56.164 "read": true, 00:10:56.164 "write": true, 00:10:56.164 "unmap": true, 00:10:56.164 "flush": true, 00:10:56.164 "reset": true, 00:10:56.164 "nvme_admin": false, 00:10:56.164 "nvme_io": false, 00:10:56.164 "nvme_io_md": false, 00:10:56.164 "write_zeroes": true, 00:10:56.164 "zcopy": true, 00:10:56.164 "get_zone_info": false, 00:10:56.164 "zone_management": false, 00:10:56.164 "zone_append": false, 00:10:56.164 "compare": false, 00:10:56.164 "compare_and_write": false, 00:10:56.164 "abort": true, 00:10:56.164 "seek_hole": false, 00:10:56.164 "seek_data": false, 00:10:56.164 "copy": true, 00:10:56.164 "nvme_iov_md": false 00:10:56.164 }, 00:10:56.164 "memory_domains": [ 00:10:56.164 { 00:10:56.164 "dma_device_id": "system", 00:10:56.164 "dma_device_type": 1 00:10:56.164 }, 00:10:56.164 { 00:10:56.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.164 "dma_device_type": 2 00:10:56.164 } 00:10:56.164 ], 00:10:56.164 "driver_specific": {} 00:10:56.164 } 00:10:56.164 ] 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.164 23:45:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.164 BaseBdev3 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.164 [ 00:10:56.164 { 
00:10:56.164 "name": "BaseBdev3", 00:10:56.164 "aliases": [ 00:10:56.164 "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901" 00:10:56.164 ], 00:10:56.164 "product_name": "Malloc disk", 00:10:56.164 "block_size": 512, 00:10:56.164 "num_blocks": 65536, 00:10:56.164 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:10:56.164 "assigned_rate_limits": { 00:10:56.164 "rw_ios_per_sec": 0, 00:10:56.164 "rw_mbytes_per_sec": 0, 00:10:56.164 "r_mbytes_per_sec": 0, 00:10:56.164 "w_mbytes_per_sec": 0 00:10:56.164 }, 00:10:56.164 "claimed": false, 00:10:56.164 "zoned": false, 00:10:56.164 "supported_io_types": { 00:10:56.164 "read": true, 00:10:56.164 "write": true, 00:10:56.164 "unmap": true, 00:10:56.164 "flush": true, 00:10:56.164 "reset": true, 00:10:56.164 "nvme_admin": false, 00:10:56.164 "nvme_io": false, 00:10:56.164 "nvme_io_md": false, 00:10:56.164 "write_zeroes": true, 00:10:56.164 "zcopy": true, 00:10:56.164 "get_zone_info": false, 00:10:56.164 "zone_management": false, 00:10:56.164 "zone_append": false, 00:10:56.164 "compare": false, 00:10:56.164 "compare_and_write": false, 00:10:56.164 "abort": true, 00:10:56.164 "seek_hole": false, 00:10:56.164 "seek_data": false, 00:10:56.164 "copy": true, 00:10:56.164 "nvme_iov_md": false 00:10:56.164 }, 00:10:56.164 "memory_domains": [ 00:10:56.164 { 00:10:56.164 "dma_device_id": "system", 00:10:56.164 "dma_device_type": 1 00:10:56.164 }, 00:10:56.164 { 00:10:56.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.164 "dma_device_type": 2 00:10:56.164 } 00:10:56.164 ], 00:10:56.164 "driver_specific": {} 00:10:56.164 } 00:10:56.164 ] 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.164 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.424 BaseBdev4 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.424 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:56.424 [ 00:10:56.424 { 00:10:56.424 "name": "BaseBdev4", 00:10:56.424 "aliases": [ 00:10:56.424 "ccc8b4b0-3a7e-471d-ab60-afbe445f928f" 00:10:56.424 ], 00:10:56.424 "product_name": "Malloc disk", 00:10:56.424 "block_size": 512, 00:10:56.424 "num_blocks": 65536, 00:10:56.424 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:10:56.424 "assigned_rate_limits": { 00:10:56.424 "rw_ios_per_sec": 0, 00:10:56.424 "rw_mbytes_per_sec": 0, 00:10:56.424 "r_mbytes_per_sec": 0, 00:10:56.424 "w_mbytes_per_sec": 0 00:10:56.424 }, 00:10:56.425 "claimed": false, 00:10:56.425 "zoned": false, 00:10:56.425 "supported_io_types": { 00:10:56.425 "read": true, 00:10:56.425 "write": true, 00:10:56.425 "unmap": true, 00:10:56.425 "flush": true, 00:10:56.425 "reset": true, 00:10:56.425 "nvme_admin": false, 00:10:56.425 "nvme_io": false, 00:10:56.425 "nvme_io_md": false, 00:10:56.425 "write_zeroes": true, 00:10:56.425 "zcopy": true, 00:10:56.425 "get_zone_info": false, 00:10:56.425 "zone_management": false, 00:10:56.425 "zone_append": false, 00:10:56.425 "compare": false, 00:10:56.425 "compare_and_write": false, 00:10:56.425 "abort": true, 00:10:56.425 "seek_hole": false, 00:10:56.425 "seek_data": false, 00:10:56.425 "copy": true, 00:10:56.425 "nvme_iov_md": false 00:10:56.425 }, 00:10:56.425 "memory_domains": [ 00:10:56.425 { 00:10:56.425 "dma_device_id": "system", 00:10:56.425 "dma_device_type": 1 00:10:56.425 }, 00:10:56.425 { 00:10:56.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.425 "dma_device_type": 2 00:10:56.425 } 00:10:56.425 ], 00:10:56.425 "driver_specific": {} 00:10:56.425 } 00:10:56.425 ] 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.425 23:45:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.425 [2024-12-06 23:45:07.789523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.425 [2024-12-06 23:45:07.789670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.425 [2024-12-06 23:45:07.789715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.425 [2024-12-06 23:45:07.791872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.425 [2024-12-06 23:45:07.791965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.425 "name": "Existed_Raid", 00:10:56.425 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:10:56.425 "strip_size_kb": 64, 00:10:56.425 "state": "configuring", 00:10:56.425 "raid_level": "concat", 00:10:56.425 "superblock": true, 00:10:56.425 "num_base_bdevs": 4, 00:10:56.425 "num_base_bdevs_discovered": 3, 00:10:56.425 "num_base_bdevs_operational": 4, 00:10:56.425 "base_bdevs_list": [ 00:10:56.425 { 00:10:56.425 "name": "BaseBdev1", 00:10:56.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.425 "is_configured": false, 00:10:56.425 "data_offset": 0, 00:10:56.425 "data_size": 0 00:10:56.425 }, 00:10:56.425 { 00:10:56.425 "name": "BaseBdev2", 00:10:56.425 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:10:56.425 "is_configured": true, 00:10:56.425 "data_offset": 2048, 00:10:56.425 "data_size": 63488 
00:10:56.425 }, 00:10:56.425 { 00:10:56.425 "name": "BaseBdev3", 00:10:56.425 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:10:56.425 "is_configured": true, 00:10:56.425 "data_offset": 2048, 00:10:56.425 "data_size": 63488 00:10:56.425 }, 00:10:56.425 { 00:10:56.425 "name": "BaseBdev4", 00:10:56.425 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:10:56.425 "is_configured": true, 00:10:56.425 "data_offset": 2048, 00:10:56.425 "data_size": 63488 00:10:56.425 } 00:10:56.425 ] 00:10:56.425 }' 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.425 23:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.684 [2024-12-06 23:45:08.228739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.684 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.973 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.973 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.973 "name": "Existed_Raid", 00:10:56.973 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:10:56.973 "strip_size_kb": 64, 00:10:56.973 "state": "configuring", 00:10:56.973 "raid_level": "concat", 00:10:56.973 "superblock": true, 00:10:56.973 "num_base_bdevs": 4, 00:10:56.973 "num_base_bdevs_discovered": 2, 00:10:56.973 "num_base_bdevs_operational": 4, 00:10:56.973 "base_bdevs_list": [ 00:10:56.973 { 00:10:56.973 "name": "BaseBdev1", 00:10:56.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.973 "is_configured": false, 00:10:56.973 "data_offset": 0, 00:10:56.973 "data_size": 0 00:10:56.973 }, 00:10:56.973 { 00:10:56.973 "name": null, 00:10:56.973 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:10:56.973 "is_configured": false, 00:10:56.973 "data_offset": 0, 00:10:56.973 "data_size": 63488 
00:10:56.973 }, 00:10:56.973 { 00:10:56.973 "name": "BaseBdev3", 00:10:56.973 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:10:56.973 "is_configured": true, 00:10:56.973 "data_offset": 2048, 00:10:56.973 "data_size": 63488 00:10:56.973 }, 00:10:56.973 { 00:10:56.973 "name": "BaseBdev4", 00:10:56.973 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:10:56.973 "is_configured": true, 00:10:56.973 "data_offset": 2048, 00:10:56.973 "data_size": 63488 00:10:56.973 } 00:10:56.973 ] 00:10:56.973 }' 00:10:56.973 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.973 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.238 [2024-12-06 23:45:08.748128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.238 BaseBdev1 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.238 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.238 [ 00:10:57.238 { 00:10:57.238 "name": "BaseBdev1", 00:10:57.238 "aliases": [ 00:10:57.238 "e19190e4-9084-4aa7-aa6f-a96dd5f07565" 00:10:57.238 ], 00:10:57.238 "product_name": "Malloc disk", 00:10:57.238 "block_size": 512, 00:10:57.238 "num_blocks": 65536, 00:10:57.238 "uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:10:57.238 "assigned_rate_limits": { 00:10:57.238 "rw_ios_per_sec": 0, 00:10:57.238 "rw_mbytes_per_sec": 0, 
00:10:57.238 "r_mbytes_per_sec": 0, 00:10:57.238 "w_mbytes_per_sec": 0 00:10:57.238 }, 00:10:57.238 "claimed": true, 00:10:57.238 "claim_type": "exclusive_write", 00:10:57.238 "zoned": false, 00:10:57.238 "supported_io_types": { 00:10:57.238 "read": true, 00:10:57.238 "write": true, 00:10:57.238 "unmap": true, 00:10:57.238 "flush": true, 00:10:57.238 "reset": true, 00:10:57.238 "nvme_admin": false, 00:10:57.238 "nvme_io": false, 00:10:57.238 "nvme_io_md": false, 00:10:57.238 "write_zeroes": true, 00:10:57.238 "zcopy": true, 00:10:57.238 "get_zone_info": false, 00:10:57.238 "zone_management": false, 00:10:57.238 "zone_append": false, 00:10:57.238 "compare": false, 00:10:57.238 "compare_and_write": false, 00:10:57.238 "abort": true, 00:10:57.238 "seek_hole": false, 00:10:57.238 "seek_data": false, 00:10:57.238 "copy": true, 00:10:57.238 "nvme_iov_md": false 00:10:57.238 }, 00:10:57.238 "memory_domains": [ 00:10:57.238 { 00:10:57.238 "dma_device_id": "system", 00:10:57.238 "dma_device_type": 1 00:10:57.238 }, 00:10:57.238 { 00:10:57.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.238 "dma_device_type": 2 00:10:57.238 } 00:10:57.238 ], 00:10:57.238 "driver_specific": {} 00:10:57.239 } 00:10:57.239 ] 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.239 23:45:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.239 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.498 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.498 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.498 "name": "Existed_Raid", 00:10:57.498 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:10:57.498 "strip_size_kb": 64, 00:10:57.498 "state": "configuring", 00:10:57.498 "raid_level": "concat", 00:10:57.498 "superblock": true, 00:10:57.498 "num_base_bdevs": 4, 00:10:57.498 "num_base_bdevs_discovered": 3, 00:10:57.498 "num_base_bdevs_operational": 4, 00:10:57.498 "base_bdevs_list": [ 00:10:57.498 { 00:10:57.498 "name": "BaseBdev1", 00:10:57.498 "uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:10:57.498 "is_configured": true, 00:10:57.498 "data_offset": 2048, 00:10:57.498 "data_size": 63488 00:10:57.498 }, 00:10:57.498 { 
00:10:57.498 "name": null, 00:10:57.498 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:10:57.498 "is_configured": false, 00:10:57.498 "data_offset": 0, 00:10:57.498 "data_size": 63488 00:10:57.498 }, 00:10:57.498 { 00:10:57.498 "name": "BaseBdev3", 00:10:57.498 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:10:57.498 "is_configured": true, 00:10:57.498 "data_offset": 2048, 00:10:57.498 "data_size": 63488 00:10:57.498 }, 00:10:57.498 { 00:10:57.498 "name": "BaseBdev4", 00:10:57.498 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:10:57.498 "is_configured": true, 00:10:57.498 "data_offset": 2048, 00:10:57.498 "data_size": 63488 00:10:57.498 } 00:10:57.498 ] 00:10:57.498 }' 00:10:57.498 23:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.498 23:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.757 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.757 [2024-12-06 23:45:09.315275] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.016 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.017 23:45:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.017 "name": "Existed_Raid", 00:10:58.017 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:10:58.017 "strip_size_kb": 64, 00:10:58.017 "state": "configuring", 00:10:58.017 "raid_level": "concat", 00:10:58.017 "superblock": true, 00:10:58.017 "num_base_bdevs": 4, 00:10:58.017 "num_base_bdevs_discovered": 2, 00:10:58.017 "num_base_bdevs_operational": 4, 00:10:58.017 "base_bdevs_list": [ 00:10:58.017 { 00:10:58.017 "name": "BaseBdev1", 00:10:58.017 "uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:10:58.017 "is_configured": true, 00:10:58.017 "data_offset": 2048, 00:10:58.017 "data_size": 63488 00:10:58.017 }, 00:10:58.017 { 00:10:58.017 "name": null, 00:10:58.017 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:10:58.017 "is_configured": false, 00:10:58.017 "data_offset": 0, 00:10:58.017 "data_size": 63488 00:10:58.017 }, 00:10:58.017 { 00:10:58.017 "name": null, 00:10:58.017 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:10:58.017 "is_configured": false, 00:10:58.017 "data_offset": 0, 00:10:58.017 "data_size": 63488 00:10:58.017 }, 00:10:58.017 { 00:10:58.017 "name": "BaseBdev4", 00:10:58.017 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:10:58.017 "is_configured": true, 00:10:58.017 "data_offset": 2048, 00:10:58.017 "data_size": 63488 00:10:58.017 } 00:10:58.017 ] 00:10:58.017 }' 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.017 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.277 
23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.277 [2024-12-06 23:45:09.790637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.277 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.535 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.535 "name": "Existed_Raid", 00:10:58.535 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:10:58.535 "strip_size_kb": 64, 00:10:58.535 "state": "configuring", 00:10:58.535 "raid_level": "concat", 00:10:58.535 "superblock": true, 00:10:58.535 "num_base_bdevs": 4, 00:10:58.535 "num_base_bdevs_discovered": 3, 00:10:58.535 "num_base_bdevs_operational": 4, 00:10:58.535 "base_bdevs_list": [ 00:10:58.535 { 00:10:58.535 "name": "BaseBdev1", 00:10:58.535 "uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:10:58.535 "is_configured": true, 00:10:58.535 "data_offset": 2048, 00:10:58.535 "data_size": 63488 00:10:58.535 }, 00:10:58.535 { 00:10:58.535 "name": null, 00:10:58.535 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:10:58.535 "is_configured": false, 00:10:58.535 "data_offset": 0, 00:10:58.535 "data_size": 63488 00:10:58.535 }, 00:10:58.535 { 00:10:58.535 "name": "BaseBdev3", 00:10:58.535 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:10:58.535 "is_configured": true, 00:10:58.535 "data_offset": 2048, 00:10:58.535 "data_size": 63488 00:10:58.535 }, 00:10:58.535 { 00:10:58.535 "name": "BaseBdev4", 00:10:58.535 "uuid": 
"ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:10:58.535 "is_configured": true, 00:10:58.535 "data_offset": 2048, 00:10:58.535 "data_size": 63488 00:10:58.535 } 00:10:58.535 ] 00:10:58.535 }' 00:10:58.535 23:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.535 23:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.794 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.794 [2024-12-06 23:45:10.305761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.054 "name": "Existed_Raid", 00:10:59.054 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:10:59.054 "strip_size_kb": 64, 00:10:59.054 "state": "configuring", 00:10:59.054 "raid_level": "concat", 00:10:59.054 "superblock": true, 00:10:59.054 "num_base_bdevs": 4, 00:10:59.054 "num_base_bdevs_discovered": 2, 00:10:59.054 "num_base_bdevs_operational": 4, 00:10:59.054 "base_bdevs_list": [ 00:10:59.054 { 00:10:59.054 "name": null, 00:10:59.054 
"uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:10:59.054 "is_configured": false, 00:10:59.054 "data_offset": 0, 00:10:59.054 "data_size": 63488 00:10:59.054 }, 00:10:59.054 { 00:10:59.054 "name": null, 00:10:59.054 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:10:59.054 "is_configured": false, 00:10:59.054 "data_offset": 0, 00:10:59.054 "data_size": 63488 00:10:59.054 }, 00:10:59.054 { 00:10:59.054 "name": "BaseBdev3", 00:10:59.054 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:10:59.054 "is_configured": true, 00:10:59.054 "data_offset": 2048, 00:10:59.054 "data_size": 63488 00:10:59.054 }, 00:10:59.054 { 00:10:59.054 "name": "BaseBdev4", 00:10:59.054 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:10:59.054 "is_configured": true, 00:10:59.054 "data_offset": 2048, 00:10:59.054 "data_size": 63488 00:10:59.054 } 00:10:59.054 ] 00:10:59.054 }' 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.054 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.314 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.314 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.314 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.314 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.574 [2024-12-06 23:45:10.908396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.574 23:45:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.574 "name": "Existed_Raid", 00:10:59.574 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:10:59.574 "strip_size_kb": 64, 00:10:59.574 "state": "configuring", 00:10:59.574 "raid_level": "concat", 00:10:59.574 "superblock": true, 00:10:59.574 "num_base_bdevs": 4, 00:10:59.574 "num_base_bdevs_discovered": 3, 00:10:59.574 "num_base_bdevs_operational": 4, 00:10:59.574 "base_bdevs_list": [ 00:10:59.574 { 00:10:59.574 "name": null, 00:10:59.574 "uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:10:59.574 "is_configured": false, 00:10:59.574 "data_offset": 0, 00:10:59.574 "data_size": 63488 00:10:59.574 }, 00:10:59.574 { 00:10:59.574 "name": "BaseBdev2", 00:10:59.574 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:10:59.574 "is_configured": true, 00:10:59.574 "data_offset": 2048, 00:10:59.574 "data_size": 63488 00:10:59.574 }, 00:10:59.574 { 00:10:59.574 "name": "BaseBdev3", 00:10:59.574 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:10:59.574 "is_configured": true, 00:10:59.574 "data_offset": 2048, 00:10:59.574 "data_size": 63488 00:10:59.574 }, 00:10:59.574 { 00:10:59.574 "name": "BaseBdev4", 00:10:59.574 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:10:59.574 "is_configured": true, 00:10:59.574 "data_offset": 2048, 00:10:59.574 "data_size": 63488 00:10:59.574 } 00:10:59.574 ] 00:10:59.574 }' 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.574 23:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.834 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.834 23:45:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.834 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.834 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.834 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e19190e4-9084-4aa7-aa6f-a96dd5f07565 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.094 [2024-12-06 23:45:11.497995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:00.094 [2024-12-06 23:45:11.498324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:00.094 NewBaseBdev 00:11:00.094 [2024-12-06 23:45:11.498373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:00.094 [2024-12-06 23:45:11.498676] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:00.094 [2024-12-06 23:45:11.498833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.094 [2024-12-06 23:45:11.498845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:00.094 [2024-12-06 23:45:11.499009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.094 
23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.094 [ 00:11:00.094 { 00:11:00.094 "name": "NewBaseBdev", 00:11:00.094 "aliases": [ 00:11:00.094 "e19190e4-9084-4aa7-aa6f-a96dd5f07565" 00:11:00.094 ], 00:11:00.094 "product_name": "Malloc disk", 00:11:00.094 "block_size": 512, 00:11:00.094 "num_blocks": 65536, 00:11:00.094 "uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:11:00.094 "assigned_rate_limits": { 00:11:00.094 "rw_ios_per_sec": 0, 00:11:00.094 "rw_mbytes_per_sec": 0, 00:11:00.094 "r_mbytes_per_sec": 0, 00:11:00.094 "w_mbytes_per_sec": 0 00:11:00.094 }, 00:11:00.094 "claimed": true, 00:11:00.094 "claim_type": "exclusive_write", 00:11:00.094 "zoned": false, 00:11:00.094 "supported_io_types": { 00:11:00.094 "read": true, 00:11:00.094 "write": true, 00:11:00.094 "unmap": true, 00:11:00.094 "flush": true, 00:11:00.094 "reset": true, 00:11:00.094 "nvme_admin": false, 00:11:00.094 "nvme_io": false, 00:11:00.094 "nvme_io_md": false, 00:11:00.094 "write_zeroes": true, 00:11:00.094 "zcopy": true, 00:11:00.094 "get_zone_info": false, 00:11:00.094 "zone_management": false, 00:11:00.094 "zone_append": false, 00:11:00.094 "compare": false, 00:11:00.094 "compare_and_write": false, 00:11:00.094 "abort": true, 00:11:00.094 "seek_hole": false, 00:11:00.094 "seek_data": false, 00:11:00.094 "copy": true, 00:11:00.094 "nvme_iov_md": false 00:11:00.094 }, 00:11:00.094 "memory_domains": [ 00:11:00.094 { 00:11:00.094 "dma_device_id": "system", 00:11:00.094 "dma_device_type": 1 00:11:00.094 }, 00:11:00.094 { 00:11:00.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.094 "dma_device_type": 2 00:11:00.094 } 00:11:00.094 ], 00:11:00.094 "driver_specific": {} 00:11:00.094 } 00:11:00.094 ] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.094 23:45:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.094 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.094 "name": "Existed_Raid", 00:11:00.094 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:11:00.094 "strip_size_kb": 64, 00:11:00.094 
"state": "online", 00:11:00.094 "raid_level": "concat", 00:11:00.094 "superblock": true, 00:11:00.094 "num_base_bdevs": 4, 00:11:00.094 "num_base_bdevs_discovered": 4, 00:11:00.094 "num_base_bdevs_operational": 4, 00:11:00.094 "base_bdevs_list": [ 00:11:00.094 { 00:11:00.094 "name": "NewBaseBdev", 00:11:00.094 "uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:11:00.094 "is_configured": true, 00:11:00.095 "data_offset": 2048, 00:11:00.095 "data_size": 63488 00:11:00.095 }, 00:11:00.095 { 00:11:00.095 "name": "BaseBdev2", 00:11:00.095 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:11:00.095 "is_configured": true, 00:11:00.095 "data_offset": 2048, 00:11:00.095 "data_size": 63488 00:11:00.095 }, 00:11:00.095 { 00:11:00.095 "name": "BaseBdev3", 00:11:00.095 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:11:00.095 "is_configured": true, 00:11:00.095 "data_offset": 2048, 00:11:00.095 "data_size": 63488 00:11:00.095 }, 00:11:00.095 { 00:11:00.095 "name": "BaseBdev4", 00:11:00.095 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:11:00.095 "is_configured": true, 00:11:00.095 "data_offset": 2048, 00:11:00.095 "data_size": 63488 00:11:00.095 } 00:11:00.095 ] 00:11:00.095 }' 00:11:00.095 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.095 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.665 
23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.665 23:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.665 [2024-12-06 23:45:11.985607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.665 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.665 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.665 "name": "Existed_Raid", 00:11:00.665 "aliases": [ 00:11:00.665 "31927f64-2542-492b-ab5f-73d0d94dbe4e" 00:11:00.665 ], 00:11:00.665 "product_name": "Raid Volume", 00:11:00.665 "block_size": 512, 00:11:00.665 "num_blocks": 253952, 00:11:00.666 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:11:00.666 "assigned_rate_limits": { 00:11:00.666 "rw_ios_per_sec": 0, 00:11:00.666 "rw_mbytes_per_sec": 0, 00:11:00.666 "r_mbytes_per_sec": 0, 00:11:00.666 "w_mbytes_per_sec": 0 00:11:00.666 }, 00:11:00.666 "claimed": false, 00:11:00.666 "zoned": false, 00:11:00.666 "supported_io_types": { 00:11:00.666 "read": true, 00:11:00.666 "write": true, 00:11:00.666 "unmap": true, 00:11:00.666 "flush": true, 00:11:00.666 "reset": true, 00:11:00.666 "nvme_admin": false, 00:11:00.666 "nvme_io": false, 00:11:00.666 "nvme_io_md": false, 00:11:00.666 "write_zeroes": true, 00:11:00.666 "zcopy": false, 00:11:00.666 "get_zone_info": false, 00:11:00.666 "zone_management": false, 00:11:00.666 "zone_append": false, 00:11:00.666 "compare": false, 00:11:00.666 "compare_and_write": false, 00:11:00.666 "abort": 
false, 00:11:00.666 "seek_hole": false, 00:11:00.666 "seek_data": false, 00:11:00.666 "copy": false, 00:11:00.666 "nvme_iov_md": false 00:11:00.666 }, 00:11:00.666 "memory_domains": [ 00:11:00.666 { 00:11:00.666 "dma_device_id": "system", 00:11:00.666 "dma_device_type": 1 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.666 "dma_device_type": 2 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "dma_device_id": "system", 00:11:00.666 "dma_device_type": 1 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.666 "dma_device_type": 2 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "dma_device_id": "system", 00:11:00.666 "dma_device_type": 1 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.666 "dma_device_type": 2 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "dma_device_id": "system", 00:11:00.666 "dma_device_type": 1 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.666 "dma_device_type": 2 00:11:00.666 } 00:11:00.666 ], 00:11:00.666 "driver_specific": { 00:11:00.666 "raid": { 00:11:00.666 "uuid": "31927f64-2542-492b-ab5f-73d0d94dbe4e", 00:11:00.666 "strip_size_kb": 64, 00:11:00.666 "state": "online", 00:11:00.666 "raid_level": "concat", 00:11:00.666 "superblock": true, 00:11:00.666 "num_base_bdevs": 4, 00:11:00.666 "num_base_bdevs_discovered": 4, 00:11:00.666 "num_base_bdevs_operational": 4, 00:11:00.666 "base_bdevs_list": [ 00:11:00.666 { 00:11:00.666 "name": "NewBaseBdev", 00:11:00.666 "uuid": "e19190e4-9084-4aa7-aa6f-a96dd5f07565", 00:11:00.666 "is_configured": true, 00:11:00.666 "data_offset": 2048, 00:11:00.666 "data_size": 63488 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "name": "BaseBdev2", 00:11:00.666 "uuid": "5f67cc4d-9a18-452c-8976-1cf0850505bd", 00:11:00.666 "is_configured": true, 00:11:00.666 "data_offset": 2048, 00:11:00.666 "data_size": 63488 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 
"name": "BaseBdev3", 00:11:00.666 "uuid": "2ddbf4ee-02e6-4ed9-a948-f68fcc5e7901", 00:11:00.666 "is_configured": true, 00:11:00.666 "data_offset": 2048, 00:11:00.666 "data_size": 63488 00:11:00.666 }, 00:11:00.666 { 00:11:00.666 "name": "BaseBdev4", 00:11:00.666 "uuid": "ccc8b4b0-3a7e-471d-ab60-afbe445f928f", 00:11:00.666 "is_configured": true, 00:11:00.666 "data_offset": 2048, 00:11:00.666 "data_size": 63488 00:11:00.666 } 00:11:00.666 ] 00:11:00.666 } 00:11:00.666 } 00:11:00.666 }' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:00.666 BaseBdev2 00:11:00.666 BaseBdev3 00:11:00.666 BaseBdev4' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.666 23:45:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.666 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.927 [2024-12-06 23:45:12.284767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.927 [2024-12-06 23:45:12.284815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.927 [2024-12-06 23:45:12.284906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.927 [2024-12-06 23:45:12.284990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.927 [2024-12-06 23:45:12.285002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71863 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71863 ']' 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71863 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71863 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71863' 00:11:00.927 killing process with pid 71863 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71863 00:11:00.927 [2024-12-06 23:45:12.320012] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.927 23:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71863 00:11:01.202 [2024-12-06 23:45:12.755314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.583 23:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:02.583 00:11:02.583 real 0m11.927s 00:11:02.583 user 0m18.779s 00:11:02.583 sys 0m2.164s 00:11:02.583 ************************************ 00:11:02.583 END TEST raid_state_function_test_sb 00:11:02.583 
************************************ 00:11:02.583 23:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.583 23:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.583 23:45:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:02.583 23:45:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:02.583 23:45:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.583 23:45:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.583 ************************************ 00:11:02.583 START TEST raid_superblock_test 00:11:02.583 ************************************ 00:11:02.583 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:02.583 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:02.584 23:45:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72545 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72545 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72545 ']' 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.584 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.844 [2024-12-06 23:45:14.163482] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:11:02.844 [2024-12-06 23:45:14.163696] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72545 ] 00:11:02.844 [2024-12-06 23:45:14.318679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.103 [2024-12-06 23:45:14.459251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.363 [2024-12-06 23:45:14.693385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.363 [2024-12-06 23:45:14.693503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:03.624 
23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.624 23:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.624 malloc1 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.624 [2024-12-06 23:45:15.054891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:03.624 [2024-12-06 23:45:15.055062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.624 [2024-12-06 23:45:15.055105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:03.624 [2024-12-06 23:45:15.055138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.624 [2024-12-06 23:45:15.057542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.624 [2024-12-06 23:45:15.057616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:03.624 pt1 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.624 malloc2 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.624 [2024-12-06 23:45:15.121749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:03.624 [2024-12-06 23:45:15.121877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.624 [2024-12-06 23:45:15.121912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:03.624 [2024-12-06 23:45:15.121921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.624 [2024-12-06 23:45:15.124318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.624 [2024-12-06 23:45:15.124354] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:03.624 
pt2 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.624 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.884 malloc3 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.884 [2024-12-06 23:45:15.194215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:03.884 [2024-12-06 23:45:15.194348] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.884 [2024-12-06 23:45:15.194388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:03.884 [2024-12-06 23:45:15.194418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.884 [2024-12-06 23:45:15.196881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.884 [2024-12-06 23:45:15.196950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:03.884 pt3 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.884 malloc4 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.884 [2024-12-06 23:45:15.261242] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:03.884 [2024-12-06 23:45:15.261379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.884 [2024-12-06 23:45:15.261421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:03.884 [2024-12-06 23:45:15.261451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.884 [2024-12-06 23:45:15.263881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.884 [2024-12-06 23:45:15.263955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:03.884 pt4 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.884 [2024-12-06 23:45:15.273262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:03.884 [2024-12-06 
23:45:15.275382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:03.884 [2024-12-06 23:45:15.275515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:03.884 [2024-12-06 23:45:15.275589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:03.884 [2024-12-06 23:45:15.275851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:03.884 [2024-12-06 23:45:15.275898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.884 [2024-12-06 23:45:15.276187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:03.884 [2024-12-06 23:45:15.276417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:03.884 [2024-12-06 23:45:15.276464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:03.884 [2024-12-06 23:45:15.276657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.884 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.884 "name": "raid_bdev1", 00:11:03.884 "uuid": "9dff6b35-713c-441f-8870-dcc5d4c529f5", 00:11:03.884 "strip_size_kb": 64, 00:11:03.884 "state": "online", 00:11:03.884 "raid_level": "concat", 00:11:03.884 "superblock": true, 00:11:03.885 "num_base_bdevs": 4, 00:11:03.885 "num_base_bdevs_discovered": 4, 00:11:03.885 "num_base_bdevs_operational": 4, 00:11:03.885 "base_bdevs_list": [ 00:11:03.885 { 00:11:03.885 "name": "pt1", 00:11:03.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.885 "is_configured": true, 00:11:03.885 "data_offset": 2048, 00:11:03.885 "data_size": 63488 00:11:03.885 }, 00:11:03.885 { 00:11:03.885 "name": "pt2", 00:11:03.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.885 "is_configured": true, 00:11:03.885 "data_offset": 2048, 00:11:03.885 "data_size": 63488 00:11:03.885 }, 00:11:03.885 { 00:11:03.885 "name": "pt3", 00:11:03.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.885 "is_configured": true, 00:11:03.885 "data_offset": 2048, 00:11:03.885 
"data_size": 63488 00:11:03.885 }, 00:11:03.885 { 00:11:03.885 "name": "pt4", 00:11:03.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.885 "is_configured": true, 00:11:03.885 "data_offset": 2048, 00:11:03.885 "data_size": 63488 00:11:03.885 } 00:11:03.885 ] 00:11:03.885 }' 00:11:03.885 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.885 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.143 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.143 [2024-12-06 23:45:15.696880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.402 "name": "raid_bdev1", 00:11:04.402 "aliases": [ 00:11:04.402 "9dff6b35-713c-441f-8870-dcc5d4c529f5" 
00:11:04.402 ], 00:11:04.402 "product_name": "Raid Volume", 00:11:04.402 "block_size": 512, 00:11:04.402 "num_blocks": 253952, 00:11:04.402 "uuid": "9dff6b35-713c-441f-8870-dcc5d4c529f5", 00:11:04.402 "assigned_rate_limits": { 00:11:04.402 "rw_ios_per_sec": 0, 00:11:04.402 "rw_mbytes_per_sec": 0, 00:11:04.402 "r_mbytes_per_sec": 0, 00:11:04.402 "w_mbytes_per_sec": 0 00:11:04.402 }, 00:11:04.402 "claimed": false, 00:11:04.402 "zoned": false, 00:11:04.402 "supported_io_types": { 00:11:04.402 "read": true, 00:11:04.402 "write": true, 00:11:04.402 "unmap": true, 00:11:04.402 "flush": true, 00:11:04.402 "reset": true, 00:11:04.402 "nvme_admin": false, 00:11:04.402 "nvme_io": false, 00:11:04.402 "nvme_io_md": false, 00:11:04.402 "write_zeroes": true, 00:11:04.402 "zcopy": false, 00:11:04.402 "get_zone_info": false, 00:11:04.402 "zone_management": false, 00:11:04.402 "zone_append": false, 00:11:04.402 "compare": false, 00:11:04.402 "compare_and_write": false, 00:11:04.402 "abort": false, 00:11:04.402 "seek_hole": false, 00:11:04.402 "seek_data": false, 00:11:04.402 "copy": false, 00:11:04.402 "nvme_iov_md": false 00:11:04.402 }, 00:11:04.402 "memory_domains": [ 00:11:04.402 { 00:11:04.402 "dma_device_id": "system", 00:11:04.402 "dma_device_type": 1 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.402 "dma_device_type": 2 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "dma_device_id": "system", 00:11:04.402 "dma_device_type": 1 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.402 "dma_device_type": 2 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "dma_device_id": "system", 00:11:04.402 "dma_device_type": 1 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.402 "dma_device_type": 2 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "dma_device_id": "system", 00:11:04.402 "dma_device_type": 1 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:04.402 "dma_device_type": 2 00:11:04.402 } 00:11:04.402 ], 00:11:04.402 "driver_specific": { 00:11:04.402 "raid": { 00:11:04.402 "uuid": "9dff6b35-713c-441f-8870-dcc5d4c529f5", 00:11:04.402 "strip_size_kb": 64, 00:11:04.402 "state": "online", 00:11:04.402 "raid_level": "concat", 00:11:04.402 "superblock": true, 00:11:04.402 "num_base_bdevs": 4, 00:11:04.402 "num_base_bdevs_discovered": 4, 00:11:04.402 "num_base_bdevs_operational": 4, 00:11:04.402 "base_bdevs_list": [ 00:11:04.402 { 00:11:04.402 "name": "pt1", 00:11:04.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.402 "is_configured": true, 00:11:04.402 "data_offset": 2048, 00:11:04.402 "data_size": 63488 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "name": "pt2", 00:11:04.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.402 "is_configured": true, 00:11:04.402 "data_offset": 2048, 00:11:04.402 "data_size": 63488 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "name": "pt3", 00:11:04.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.402 "is_configured": true, 00:11:04.402 "data_offset": 2048, 00:11:04.402 "data_size": 63488 00:11:04.402 }, 00:11:04.402 { 00:11:04.402 "name": "pt4", 00:11:04.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.402 "is_configured": true, 00:11:04.402 "data_offset": 2048, 00:11:04.402 "data_size": 63488 00:11:04.402 } 00:11:04.402 ] 00:11:04.402 } 00:11:04.402 } 00:11:04.402 }' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:04.402 pt2 00:11:04.402 pt3 00:11:04.402 pt4' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.402 23:45:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.402 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.662 23:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.662 23:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 [2024-12-06 23:45:16.008236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9dff6b35-713c-441f-8870-dcc5d4c529f5 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9dff6b35-713c-441f-8870-dcc5d4c529f5 ']' 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 [2024-12-06 23:45:16.055844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.662 [2024-12-06 23:45:16.055913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.662 [2024-12-06 23:45:16.056026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.662 [2024-12-06 23:45:16.056124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.662 [2024-12-06 23:45:16.056190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.662 23:45:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.662 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 [2024-12-06 23:45:16.215586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:04.662 [2024-12-06 23:45:16.217689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:04.662 [2024-12-06 23:45:16.217732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:04.662 [2024-12-06 23:45:16.217763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:04.662 [2024-12-06 23:45:16.217814] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:04.662 [2024-12-06 23:45:16.217874] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:04.662 [2024-12-06 23:45:16.217892] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:04.662 [2024-12-06 23:45:16.217910] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:04.662 [2024-12-06 23:45:16.217921] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.662 [2024-12-06 23:45:16.217932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:04.922 request: 00:11:04.922 { 00:11:04.922 "name": "raid_bdev1", 00:11:04.922 "raid_level": "concat", 00:11:04.922 "base_bdevs": [ 00:11:04.922 "malloc1", 00:11:04.922 "malloc2", 00:11:04.922 "malloc3", 00:11:04.922 "malloc4" 00:11:04.922 ], 00:11:04.922 "strip_size_kb": 64, 00:11:04.922 "superblock": false, 00:11:04.922 "method": "bdev_raid_create", 00:11:04.922 "req_id": 1 00:11:04.922 } 00:11:04.922 Got JSON-RPC error response 00:11:04.922 response: 00:11:04.922 { 00:11:04.922 "code": -17, 00:11:04.922 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:04.922 } 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.922 [2024-12-06 23:45:16.283440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.922 [2024-12-06 23:45:16.283553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.922 [2024-12-06 23:45:16.283593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:04.922 [2024-12-06 23:45:16.283626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.922 [2024-12-06 23:45:16.286094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.922 [2024-12-06 23:45:16.286166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.922 [2024-12-06 23:45:16.286265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:04.922 [2024-12-06 23:45:16.286338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.922 pt1 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.922 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.922 "name": "raid_bdev1", 00:11:04.922 "uuid": "9dff6b35-713c-441f-8870-dcc5d4c529f5", 00:11:04.922 "strip_size_kb": 64, 00:11:04.922 "state": "configuring", 00:11:04.922 "raid_level": "concat", 00:11:04.922 "superblock": true, 00:11:04.922 "num_base_bdevs": 4, 00:11:04.922 "num_base_bdevs_discovered": 1, 00:11:04.922 "num_base_bdevs_operational": 4, 00:11:04.922 "base_bdevs_list": [ 00:11:04.922 { 00:11:04.922 "name": "pt1", 00:11:04.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.922 "is_configured": true, 00:11:04.922 "data_offset": 2048, 00:11:04.922 "data_size": 63488 00:11:04.922 }, 00:11:04.922 { 00:11:04.922 "name": null, 00:11:04.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.922 "is_configured": false, 00:11:04.922 "data_offset": 2048, 00:11:04.922 "data_size": 63488 00:11:04.922 }, 00:11:04.922 { 00:11:04.922 "name": null, 00:11:04.923 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.923 "is_configured": false, 00:11:04.923 "data_offset": 2048, 00:11:04.923 "data_size": 63488 00:11:04.923 }, 00:11:04.923 { 00:11:04.923 "name": null, 00:11:04.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.923 "is_configured": false, 00:11:04.923 "data_offset": 2048, 00:11:04.923 "data_size": 63488 00:11:04.923 } 00:11:04.923 ] 00:11:04.923 }' 00:11:04.923 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.923 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.182 [2024-12-06 23:45:16.722827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.182 [2024-12-06 23:45:16.723027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.182 [2024-12-06 23:45:16.723081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:05.182 [2024-12-06 23:45:16.723116] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.182 [2024-12-06 23:45:16.723646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.182 [2024-12-06 23:45:16.723690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.182 [2024-12-06 23:45:16.723789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:05.182 [2024-12-06 23:45:16.723819] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.182 pt2 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.182 [2024-12-06 23:45:16.734783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.182 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.442 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.442 23:45:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.442 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.442 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.442 23:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.442 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.442 "name": "raid_bdev1", 00:11:05.442 "uuid": "9dff6b35-713c-441f-8870-dcc5d4c529f5", 00:11:05.442 "strip_size_kb": 64, 00:11:05.442 "state": "configuring", 00:11:05.442 "raid_level": "concat", 00:11:05.442 "superblock": true, 00:11:05.442 "num_base_bdevs": 4, 00:11:05.442 "num_base_bdevs_discovered": 1, 00:11:05.442 "num_base_bdevs_operational": 4, 00:11:05.442 "base_bdevs_list": [ 00:11:05.442 { 00:11:05.442 "name": "pt1", 00:11:05.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.442 "is_configured": true, 00:11:05.442 "data_offset": 2048, 00:11:05.442 "data_size": 63488 00:11:05.442 }, 00:11:05.442 { 00:11:05.442 "name": null, 00:11:05.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.442 "is_configured": false, 00:11:05.442 "data_offset": 0, 00:11:05.442 "data_size": 63488 00:11:05.442 }, 00:11:05.442 { 00:11:05.442 "name": null, 00:11:05.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.442 "is_configured": false, 00:11:05.442 "data_offset": 2048, 00:11:05.442 "data_size": 63488 00:11:05.442 }, 00:11:05.442 { 00:11:05.442 "name": null, 00:11:05.442 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.442 "is_configured": false, 00:11:05.442 "data_offset": 2048, 00:11:05.442 "data_size": 63488 00:11:05.442 } 00:11:05.442 ] 00:11:05.442 }' 00:11:05.442 23:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.442 23:45:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.701 [2024-12-06 23:45:17.158079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.701 [2024-12-06 23:45:17.158275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.701 [2024-12-06 23:45:17.158317] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:05.701 [2024-12-06 23:45:17.158348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.701 [2024-12-06 23:45:17.158919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.701 [2024-12-06 23:45:17.159012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.701 [2024-12-06 23:45:17.159147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:05.701 [2024-12-06 23:45:17.159205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.701 pt2 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.701 [2024-12-06 23:45:17.169989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.701 [2024-12-06 23:45:17.170083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.701 [2024-12-06 23:45:17.170120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:05.701 [2024-12-06 23:45:17.170148] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.701 [2024-12-06 23:45:17.170615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.701 [2024-12-06 23:45:17.170691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.701 [2024-12-06 23:45:17.170799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:05.701 [2024-12-06 23:45:17.170856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.701 pt3 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.701 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.701 [2024-12-06 23:45:17.181929] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:05.701 [2024-12-06 23:45:17.182006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.701 [2024-12-06 23:45:17.182039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:05.701 [2024-12-06 23:45:17.182062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.701 [2024-12-06 23:45:17.182501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.701 [2024-12-06 23:45:17.182564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:05.701 [2024-12-06 23:45:17.182654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:05.701 [2024-12-06 23:45:17.182712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:05.701 [2024-12-06 23:45:17.182857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:05.701 [2024-12-06 23:45:17.182867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.701 [2024-12-06 23:45:17.183142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:05.701 [2024-12-06 23:45:17.183287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:05.701 [2024-12-06 23:45:17.183301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:05.701 [2024-12-06 23:45:17.183431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.701 pt4 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.702 "name": "raid_bdev1", 00:11:05.702 "uuid": "9dff6b35-713c-441f-8870-dcc5d4c529f5", 00:11:05.702 "strip_size_kb": 64, 00:11:05.702 "state": "online", 00:11:05.702 "raid_level": "concat", 00:11:05.702 
"superblock": true, 00:11:05.702 "num_base_bdevs": 4, 00:11:05.702 "num_base_bdevs_discovered": 4, 00:11:05.702 "num_base_bdevs_operational": 4, 00:11:05.702 "base_bdevs_list": [ 00:11:05.702 { 00:11:05.702 "name": "pt1", 00:11:05.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.702 "is_configured": true, 00:11:05.702 "data_offset": 2048, 00:11:05.702 "data_size": 63488 00:11:05.702 }, 00:11:05.702 { 00:11:05.702 "name": "pt2", 00:11:05.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.702 "is_configured": true, 00:11:05.702 "data_offset": 2048, 00:11:05.702 "data_size": 63488 00:11:05.702 }, 00:11:05.702 { 00:11:05.702 "name": "pt3", 00:11:05.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.702 "is_configured": true, 00:11:05.702 "data_offset": 2048, 00:11:05.702 "data_size": 63488 00:11:05.702 }, 00:11:05.702 { 00:11:05.702 "name": "pt4", 00:11:05.702 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.702 "is_configured": true, 00:11:05.702 "data_offset": 2048, 00:11:05.702 "data_size": 63488 00:11:05.702 } 00:11:05.702 ] 00:11:05.702 }' 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.702 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.270 23:45:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.270 [2024-12-06 23:45:17.593653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.270 "name": "raid_bdev1", 00:11:06.270 "aliases": [ 00:11:06.270 "9dff6b35-713c-441f-8870-dcc5d4c529f5" 00:11:06.270 ], 00:11:06.270 "product_name": "Raid Volume", 00:11:06.270 "block_size": 512, 00:11:06.270 "num_blocks": 253952, 00:11:06.270 "uuid": "9dff6b35-713c-441f-8870-dcc5d4c529f5", 00:11:06.270 "assigned_rate_limits": { 00:11:06.270 "rw_ios_per_sec": 0, 00:11:06.270 "rw_mbytes_per_sec": 0, 00:11:06.270 "r_mbytes_per_sec": 0, 00:11:06.270 "w_mbytes_per_sec": 0 00:11:06.270 }, 00:11:06.270 "claimed": false, 00:11:06.270 "zoned": false, 00:11:06.270 "supported_io_types": { 00:11:06.270 "read": true, 00:11:06.270 "write": true, 00:11:06.270 "unmap": true, 00:11:06.270 "flush": true, 00:11:06.270 "reset": true, 00:11:06.270 "nvme_admin": false, 00:11:06.270 "nvme_io": false, 00:11:06.270 "nvme_io_md": false, 00:11:06.270 "write_zeroes": true, 00:11:06.270 "zcopy": false, 00:11:06.270 "get_zone_info": false, 00:11:06.270 "zone_management": false, 00:11:06.270 "zone_append": false, 00:11:06.270 "compare": false, 00:11:06.270 "compare_and_write": false, 00:11:06.270 "abort": false, 00:11:06.270 "seek_hole": false, 00:11:06.270 "seek_data": false, 00:11:06.270 "copy": false, 00:11:06.270 "nvme_iov_md": false 00:11:06.270 }, 00:11:06.270 
"memory_domains": [ 00:11:06.270 { 00:11:06.270 "dma_device_id": "system", 00:11:06.270 "dma_device_type": 1 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.270 "dma_device_type": 2 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "dma_device_id": "system", 00:11:06.270 "dma_device_type": 1 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.270 "dma_device_type": 2 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "dma_device_id": "system", 00:11:06.270 "dma_device_type": 1 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.270 "dma_device_type": 2 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "dma_device_id": "system", 00:11:06.270 "dma_device_type": 1 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.270 "dma_device_type": 2 00:11:06.270 } 00:11:06.270 ], 00:11:06.270 "driver_specific": { 00:11:06.270 "raid": { 00:11:06.270 "uuid": "9dff6b35-713c-441f-8870-dcc5d4c529f5", 00:11:06.270 "strip_size_kb": 64, 00:11:06.270 "state": "online", 00:11:06.270 "raid_level": "concat", 00:11:06.270 "superblock": true, 00:11:06.270 "num_base_bdevs": 4, 00:11:06.270 "num_base_bdevs_discovered": 4, 00:11:06.270 "num_base_bdevs_operational": 4, 00:11:06.270 "base_bdevs_list": [ 00:11:06.270 { 00:11:06.270 "name": "pt1", 00:11:06.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.270 "is_configured": true, 00:11:06.270 "data_offset": 2048, 00:11:06.270 "data_size": 63488 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "name": "pt2", 00:11:06.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.270 "is_configured": true, 00:11:06.270 "data_offset": 2048, 00:11:06.270 "data_size": 63488 00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "name": "pt3", 00:11:06.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.270 "is_configured": true, 00:11:06.270 "data_offset": 2048, 00:11:06.270 "data_size": 63488 
00:11:06.270 }, 00:11:06.270 { 00:11:06.270 "name": "pt4", 00:11:06.270 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.270 "is_configured": true, 00:11:06.270 "data_offset": 2048, 00:11:06.270 "data_size": 63488 00:11:06.270 } 00:11:06.270 ] 00:11:06.270 } 00:11:06.270 } 00:11:06.270 }' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:06.270 pt2 00:11:06.270 pt3 00:11:06.270 pt4' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.270 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.271 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.271 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.271 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.271 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.271 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:06.271 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.271 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.271 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:06.529 [2024-12-06 23:45:17.933077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9dff6b35-713c-441f-8870-dcc5d4c529f5 '!=' 9dff6b35-713c-441f-8870-dcc5d4c529f5 ']' 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72545 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72545 ']' 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72545 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:06.529 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.530 23:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72545 00:11:06.530 killing process with pid 72545 00:11:06.530 23:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.530 23:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.530 23:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72545' 00:11:06.530 23:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72545 00:11:06.530 [2024-12-06 23:45:18.018179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.530 23:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72545 00:11:06.530 [2024-12-06 23:45:18.018308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.530 [2024-12-06 23:45:18.018400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.530 [2024-12-06 23:45:18.018410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:07.097 [2024-12-06 23:45:18.447067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.476 23:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:08.476 ************************************ 00:11:08.476 END TEST raid_superblock_test 00:11:08.476 ************************************ 00:11:08.476 00:11:08.476 real 0m5.597s 00:11:08.476 user 0m7.799s 00:11:08.476 sys 0m1.041s 00:11:08.476 23:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.476 23:45:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.476 23:45:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:08.476 23:45:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:08.476 23:45:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.476 23:45:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.476 ************************************ 00:11:08.476 START TEST raid_read_error_test 00:11:08.476 ************************************ 00:11:08.476 23:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:08.476 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:08.476 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:08.476 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aY39nmRWyk 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72804 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72804 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72804 ']' 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.477 23:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.477 [2024-12-06 23:45:19.854412] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:11:08.477 [2024-12-06 23:45:19.854539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72804 ] 00:11:08.477 [2024-12-06 23:45:20.021380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.736 [2024-12-06 23:45:20.161268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.041 [2024-12-06 23:45:20.399394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.041 [2024-12-06 23:45:20.399455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 BaseBdev1_malloc 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 true 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 [2024-12-06 23:45:20.745048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:09.310 [2024-12-06 23:45:20.745123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.310 [2024-12-06 23:45:20.745146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:09.310 [2024-12-06 23:45:20.745159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.310 [2024-12-06 23:45:20.747584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.310 [2024-12-06 23:45:20.747627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.310 BaseBdev1 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 BaseBdev2_malloc 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.310 true 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:09.310 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.311 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.311 [2024-12-06 23:45:20.819495] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:09.311 [2024-12-06 23:45:20.819566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.311 [2024-12-06 23:45:20.819586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:09.311 [2024-12-06 23:45:20.819597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.311 [2024-12-06 23:45:20.822034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.311 [2024-12-06 23:45:20.822074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:09.311 BaseBdev2 00:11:09.311 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.311 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.311 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:09.311 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.311 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.571 BaseBdev3_malloc 00:11:09.572 23:45:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.572 true 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.572 [2024-12-06 23:45:20.905970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:09.572 [2024-12-06 23:45:20.906039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.572 [2024-12-06 23:45:20.906060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:09.572 [2024-12-06 23:45:20.906072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.572 [2024-12-06 23:45:20.908525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.572 [2024-12-06 23:45:20.908566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:09.572 BaseBdev3 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.572 BaseBdev4_malloc 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.572 true 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.572 [2024-12-06 23:45:20.981427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:09.572 [2024-12-06 23:45:20.981496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.572 [2024-12-06 23:45:20.981516] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:09.572 [2024-12-06 23:45:20.981529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.572 [2024-12-06 23:45:20.984006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.572 [2024-12-06 23:45:20.984050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:09.572 BaseBdev4 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.572 [2024-12-06 23:45:20.993487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.572 [2024-12-06 23:45:20.995615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.572 [2024-12-06 23:45:20.995818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.572 [2024-12-06 23:45:20.995901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.572 [2024-12-06 23:45:20.996172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:09.572 [2024-12-06 23:45:20.996190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.572 [2024-12-06 23:45:20.996470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:09.572 [2024-12-06 23:45:20.996628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:09.572 [2024-12-06 23:45:20.996639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:09.572 [2024-12-06 23:45:20.996831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:09.572 23:45:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.572 23:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.572 23:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.572 23:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.572 23:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.572 23:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.572 23:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.572 23:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.572 "name": "raid_bdev1", 00:11:09.572 "uuid": "3a2d769b-2dbc-409b-a56b-e0b2871cbc46", 00:11:09.572 "strip_size_kb": 64, 00:11:09.572 "state": "online", 00:11:09.572 "raid_level": "concat", 00:11:09.572 "superblock": true, 00:11:09.572 "num_base_bdevs": 4, 00:11:09.572 "num_base_bdevs_discovered": 4, 00:11:09.572 "num_base_bdevs_operational": 4, 00:11:09.572 "base_bdevs_list": [ 
00:11:09.572 { 00:11:09.572 "name": "BaseBdev1", 00:11:09.572 "uuid": "33ec4be7-050d-51e1-8aeb-60ada0d8857c", 00:11:09.572 "is_configured": true, 00:11:09.572 "data_offset": 2048, 00:11:09.572 "data_size": 63488 00:11:09.572 }, 00:11:09.572 { 00:11:09.572 "name": "BaseBdev2", 00:11:09.572 "uuid": "f20bda74-26f0-5023-a0a7-9b090336b349", 00:11:09.572 "is_configured": true, 00:11:09.572 "data_offset": 2048, 00:11:09.572 "data_size": 63488 00:11:09.572 }, 00:11:09.572 { 00:11:09.572 "name": "BaseBdev3", 00:11:09.572 "uuid": "ae18d9a3-b0ba-582e-94e2-62b50a915d2e", 00:11:09.572 "is_configured": true, 00:11:09.572 "data_offset": 2048, 00:11:09.572 "data_size": 63488 00:11:09.572 }, 00:11:09.572 { 00:11:09.572 "name": "BaseBdev4", 00:11:09.572 "uuid": "baa235ef-9ecf-5708-9bbe-d29c577389ba", 00:11:09.572 "is_configured": true, 00:11:09.572 "data_offset": 2048, 00:11:09.572 "data_size": 63488 00:11:09.572 } 00:11:09.572 ] 00:11:09.572 }' 00:11:09.572 23:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.572 23:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.141 23:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.141 23:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.141 [2024-12-06 23:45:21.522136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.081 23:45:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.081 23:45:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.081 "name": "raid_bdev1", 00:11:11.081 "uuid": "3a2d769b-2dbc-409b-a56b-e0b2871cbc46", 00:11:11.081 "strip_size_kb": 64, 00:11:11.081 "state": "online", 00:11:11.081 "raid_level": "concat", 00:11:11.081 "superblock": true, 00:11:11.081 "num_base_bdevs": 4, 00:11:11.081 "num_base_bdevs_discovered": 4, 00:11:11.081 "num_base_bdevs_operational": 4, 00:11:11.081 "base_bdevs_list": [ 00:11:11.081 { 00:11:11.081 "name": "BaseBdev1", 00:11:11.081 "uuid": "33ec4be7-050d-51e1-8aeb-60ada0d8857c", 00:11:11.081 "is_configured": true, 00:11:11.081 "data_offset": 2048, 00:11:11.081 "data_size": 63488 00:11:11.081 }, 00:11:11.081 { 00:11:11.081 "name": "BaseBdev2", 00:11:11.081 "uuid": "f20bda74-26f0-5023-a0a7-9b090336b349", 00:11:11.081 "is_configured": true, 00:11:11.081 "data_offset": 2048, 00:11:11.081 "data_size": 63488 00:11:11.081 }, 00:11:11.081 { 00:11:11.081 "name": "BaseBdev3", 00:11:11.081 "uuid": "ae18d9a3-b0ba-582e-94e2-62b50a915d2e", 00:11:11.081 "is_configured": true, 00:11:11.081 "data_offset": 2048, 00:11:11.081 "data_size": 63488 00:11:11.081 }, 00:11:11.081 { 00:11:11.081 "name": "BaseBdev4", 00:11:11.081 "uuid": "baa235ef-9ecf-5708-9bbe-d29c577389ba", 00:11:11.081 "is_configured": true, 00:11:11.081 "data_offset": 2048, 00:11:11.081 "data_size": 63488 00:11:11.081 } 00:11:11.081 ] 00:11:11.081 }' 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.081 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.342 [2024-12-06 23:45:22.871021] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.342 [2024-12-06 23:45:22.871180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.342 [2024-12-06 23:45:22.873963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.342 [2024-12-06 23:45:22.874032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.342 [2024-12-06 23:45:22.874078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.342 [2024-12-06 23:45:22.874091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:11.342 { 00:11:11.342 "results": [ 00:11:11.342 { 00:11:11.342 "job": "raid_bdev1", 00:11:11.342 "core_mask": "0x1", 00:11:11.342 "workload": "randrw", 00:11:11.342 "percentage": 50, 00:11:11.342 "status": "finished", 00:11:11.342 "queue_depth": 1, 00:11:11.342 "io_size": 131072, 00:11:11.342 "runtime": 1.349412, 00:11:11.342 "iops": 13282.822444146042, 00:11:11.342 "mibps": 1660.3528055182553, 00:11:11.342 "io_failed": 1, 00:11:11.342 "io_timeout": 0, 00:11:11.342 "avg_latency_us": 105.92962028831923, 00:11:11.342 "min_latency_us": 26.270742358078603, 00:11:11.342 "max_latency_us": 1409.4532751091704 00:11:11.342 } 00:11:11.342 ], 00:11:11.342 "core_count": 1 00:11:11.342 } 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72804 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72804 ']' 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72804 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:11.342 23:45:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.342 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72804 00:11:11.602 killing process with pid 72804 00:11:11.602 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.602 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.602 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72804' 00:11:11.602 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72804 00:11:11.602 [2024-12-06 23:45:22.916097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.602 23:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72804 00:11:11.862 [2024-12-06 23:45:23.275750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aY39nmRWyk 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:13.245 00:11:13.245 real 0m4.838s 00:11:13.245 user 0m5.539s 00:11:13.245 sys 0m0.686s 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.245 23:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.245 ************************************ 00:11:13.245 END TEST raid_read_error_test 00:11:13.245 ************************************ 00:11:13.245 23:45:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:13.245 23:45:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.245 23:45:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.245 23:45:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.245 ************************************ 00:11:13.245 START TEST raid_write_error_test 00:11:13.245 ************************************ 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.245 23:45:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.whoMq2tO70 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72955 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72955 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72955 ']' 00:11:13.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.245 23:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.245 [2024-12-06 23:45:24.754512] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:11:13.245 [2024-12-06 23:45:24.754618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72955 ] 00:11:13.505 [2024-12-06 23:45:24.928657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.763 [2024-12-06 23:45:25.068206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.763 [2024-12-06 23:45:25.305850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.763 [2024-12-06 23:45:25.305905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.022 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.022 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.022 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.022 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.022 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.022 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.282 BaseBdev1_malloc 00:11:14.282 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.282 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:14.282 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.282 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.282 true 00:11:14.282 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:14.282 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:14.282 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.282 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.282 [2024-12-06 23:45:25.645582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:14.282 [2024-12-06 23:45:25.645781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.282 [2024-12-06 23:45:25.645815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:14.283 [2024-12-06 23:45:25.645829] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.283 [2024-12-06 23:45:25.648457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.283 [2024-12-06 23:45:25.648507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.283 BaseBdev1 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.283 BaseBdev2_malloc 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:14.283 23:45:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.283 true 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.283 [2024-12-06 23:45:25.717513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:14.283 [2024-12-06 23:45:25.717680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.283 [2024-12-06 23:45:25.717705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:14.283 [2024-12-06 23:45:25.717717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.283 [2024-12-06 23:45:25.720192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.283 [2024-12-06 23:45:25.720234] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:14.283 BaseBdev2 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:14.283 BaseBdev3_malloc 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.283 true 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.283 [2024-12-06 23:45:25.803525] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:14.283 [2024-12-06 23:45:25.803706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.283 [2024-12-06 23:45:25.803740] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:14.283 [2024-12-06 23:45:25.803755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.283 [2024-12-06 23:45:25.806347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.283 [2024-12-06 23:45:25.806388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:14.283 BaseBdev3 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.283 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.544 BaseBdev4_malloc 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.544 true 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.544 [2024-12-06 23:45:25.878220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:14.544 [2024-12-06 23:45:25.878286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.544 [2024-12-06 23:45:25.878306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:14.544 [2024-12-06 23:45:25.878318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.544 [2024-12-06 23:45:25.880704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.544 [2024-12-06 23:45:25.880829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:14.544 BaseBdev4 
00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.544 [2024-12-06 23:45:25.890271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.544 [2024-12-06 23:45:25.892427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.544 [2024-12-06 23:45:25.892506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.544 [2024-12-06 23:45:25.892568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.544 [2024-12-06 23:45:25.892809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:14.544 [2024-12-06 23:45:25.892827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:14.544 [2024-12-06 23:45:25.893077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:14.544 [2024-12-06 23:45:25.893256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:14.544 [2024-12-06 23:45:25.893276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:14.544 [2024-12-06 23:45:25.893430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.544 "name": "raid_bdev1", 00:11:14.544 "uuid": "a10c66be-38ca-461e-aaa4-69378ecefd91", 00:11:14.544 "strip_size_kb": 64, 00:11:14.544 "state": "online", 00:11:14.544 "raid_level": "concat", 00:11:14.544 "superblock": true, 00:11:14.544 "num_base_bdevs": 4, 00:11:14.544 "num_base_bdevs_discovered": 4, 00:11:14.544 
"num_base_bdevs_operational": 4, 00:11:14.544 "base_bdevs_list": [ 00:11:14.544 { 00:11:14.544 "name": "BaseBdev1", 00:11:14.544 "uuid": "1fc92e12-1ac8-5eca-b644-48624c9e6903", 00:11:14.544 "is_configured": true, 00:11:14.544 "data_offset": 2048, 00:11:14.544 "data_size": 63488 00:11:14.544 }, 00:11:14.544 { 00:11:14.544 "name": "BaseBdev2", 00:11:14.544 "uuid": "d8890872-00d3-500a-9d0c-4f8325ccdca9", 00:11:14.544 "is_configured": true, 00:11:14.544 "data_offset": 2048, 00:11:14.544 "data_size": 63488 00:11:14.544 }, 00:11:14.544 { 00:11:14.544 "name": "BaseBdev3", 00:11:14.544 "uuid": "35bcf7a8-92e0-51d2-9ae2-2c9015ad0657", 00:11:14.544 "is_configured": true, 00:11:14.544 "data_offset": 2048, 00:11:14.544 "data_size": 63488 00:11:14.544 }, 00:11:14.544 { 00:11:14.544 "name": "BaseBdev4", 00:11:14.544 "uuid": "7e60fa27-0fd4-5618-9290-060505c30380", 00:11:14.544 "is_configured": true, 00:11:14.544 "data_offset": 2048, 00:11:14.544 "data_size": 63488 00:11:14.544 } 00:11:14.544 ] 00:11:14.544 }' 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.544 23:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.804 23:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:14.804 23:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:15.063 [2024-12-06 23:45:26.382958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.001 23:45:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.001 "name": "raid_bdev1", 00:11:16.001 "uuid": "a10c66be-38ca-461e-aaa4-69378ecefd91", 00:11:16.001 "strip_size_kb": 64, 00:11:16.001 "state": "online", 00:11:16.001 "raid_level": "concat", 00:11:16.001 "superblock": true, 00:11:16.001 "num_base_bdevs": 4, 00:11:16.001 "num_base_bdevs_discovered": 4, 00:11:16.001 "num_base_bdevs_operational": 4, 00:11:16.001 "base_bdevs_list": [ 00:11:16.001 { 00:11:16.001 "name": "BaseBdev1", 00:11:16.001 "uuid": "1fc92e12-1ac8-5eca-b644-48624c9e6903", 00:11:16.001 "is_configured": true, 00:11:16.001 "data_offset": 2048, 00:11:16.001 "data_size": 63488 00:11:16.001 }, 00:11:16.001 { 00:11:16.001 "name": "BaseBdev2", 00:11:16.001 "uuid": "d8890872-00d3-500a-9d0c-4f8325ccdca9", 00:11:16.001 "is_configured": true, 00:11:16.001 "data_offset": 2048, 00:11:16.001 "data_size": 63488 00:11:16.001 }, 00:11:16.001 { 00:11:16.001 "name": "BaseBdev3", 00:11:16.001 "uuid": "35bcf7a8-92e0-51d2-9ae2-2c9015ad0657", 00:11:16.001 "is_configured": true, 00:11:16.001 "data_offset": 2048, 00:11:16.001 "data_size": 63488 00:11:16.001 }, 00:11:16.001 { 00:11:16.001 "name": "BaseBdev4", 00:11:16.001 "uuid": "7e60fa27-0fd4-5618-9290-060505c30380", 00:11:16.001 "is_configured": true, 00:11:16.001 "data_offset": 2048, 00:11:16.001 "data_size": 63488 00:11:16.001 } 00:11:16.001 ] 00:11:16.001 }' 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.001 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.260 [2024-12-06 23:45:27.752013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.260 [2024-12-06 23:45:27.752068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.260 [2024-12-06 23:45:27.754712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.260 [2024-12-06 23:45:27.754781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.260 [2024-12-06 23:45:27.754829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.260 [2024-12-06 23:45:27.754846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:16.260 { 00:11:16.260 "results": [ 00:11:16.260 { 00:11:16.260 "job": "raid_bdev1", 00:11:16.260 "core_mask": "0x1", 00:11:16.260 "workload": "randrw", 00:11:16.260 "percentage": 50, 00:11:16.260 "status": "finished", 00:11:16.260 "queue_depth": 1, 00:11:16.260 "io_size": 131072, 00:11:16.260 "runtime": 1.369566, 00:11:16.260 "iops": 13456.817707215278, 00:11:16.260 "mibps": 1682.1022134019097, 00:11:16.260 "io_failed": 1, 00:11:16.260 "io_timeout": 0, 00:11:16.260 "avg_latency_us": 104.56502527188032, 00:11:16.260 "min_latency_us": 25.4882096069869, 00:11:16.260 "max_latency_us": 1402.2986899563318 00:11:16.260 } 00:11:16.260 ], 00:11:16.260 "core_count": 1 00:11:16.260 } 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72955 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72955 ']' 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72955 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:16.260 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.261 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72955 00:11:16.261 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.261 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.261 killing process with pid 72955 00:11:16.261 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72955' 00:11:16.261 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72955 00:11:16.261 [2024-12-06 23:45:27.801473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.261 23:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72955 00:11:16.827 [2024-12-06 23:45:28.159403] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.whoMq2tO70 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:18.204 ************************************ 00:11:18.204 END TEST raid_write_error_test 00:11:18.204 ************************************ 00:11:18.204 23:45:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:18.204 00:11:18.204 real 0m4.816s 00:11:18.204 user 0m5.506s 00:11:18.204 sys 0m0.670s 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.204 23:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.204 23:45:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:18.204 23:45:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:18.204 23:45:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:18.204 23:45:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.204 23:45:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.204 ************************************ 00:11:18.204 START TEST raid_state_function_test 00:11:18.204 ************************************ 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:18.204 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:18.205 23:45:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:18.205 Process raid pid: 73100 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73100 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73100' 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73100 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73100 ']' 00:11:18.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.205 23:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.205 [2024-12-06 23:45:29.638473] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:11:18.205 [2024-12-06 23:45:29.638593] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.464 [2024-12-06 23:45:29.812547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.464 [2024-12-06 23:45:29.948781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.723 [2024-12-06 23:45:30.190986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.723 [2024-12-06 23:45:30.191030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.981 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.981 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:18.981 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.981 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.981 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.981 [2024-12-06 23:45:30.467687] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.981 [2024-12-06 23:45:30.467759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.981 [2024-12-06 23:45:30.467776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.981 [2024-12-06 23:45:30.467786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.981 [2024-12-06 23:45:30.467793] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:18.981 [2024-12-06 23:45:30.467803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.981 [2024-12-06 23:45:30.467808] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.982 [2024-12-06 23:45:30.467817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.982 "name": "Existed_Raid", 00:11:18.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.982 "strip_size_kb": 0, 00:11:18.982 "state": "configuring", 00:11:18.982 "raid_level": "raid1", 00:11:18.982 "superblock": false, 00:11:18.982 "num_base_bdevs": 4, 00:11:18.982 "num_base_bdevs_discovered": 0, 00:11:18.982 "num_base_bdevs_operational": 4, 00:11:18.982 "base_bdevs_list": [ 00:11:18.982 { 00:11:18.982 "name": "BaseBdev1", 00:11:18.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.982 "is_configured": false, 00:11:18.982 "data_offset": 0, 00:11:18.982 "data_size": 0 00:11:18.982 }, 00:11:18.982 { 00:11:18.982 "name": "BaseBdev2", 00:11:18.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.982 "is_configured": false, 00:11:18.982 "data_offset": 0, 00:11:18.982 "data_size": 0 00:11:18.982 }, 00:11:18.982 { 00:11:18.982 "name": "BaseBdev3", 00:11:18.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.982 "is_configured": false, 00:11:18.982 "data_offset": 0, 00:11:18.982 "data_size": 0 00:11:18.982 }, 00:11:18.982 { 00:11:18.982 "name": "BaseBdev4", 00:11:18.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.982 "is_configured": false, 00:11:18.982 "data_offset": 0, 00:11:18.982 "data_size": 0 00:11:18.982 } 00:11:18.982 ] 00:11:18.982 }' 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.982 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 [2024-12-06 23:45:30.894960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.550 [2024-12-06 23:45:30.895127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 [2024-12-06 23:45:30.902877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.550 [2024-12-06 23:45:30.902970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.550 [2024-12-06 23:45:30.903015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.550 [2024-12-06 23:45:30.903039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.550 [2024-12-06 23:45:30.903057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:19.550 [2024-12-06 23:45:30.903078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.550 [2024-12-06 23:45:30.903095] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:19.550 [2024-12-06 23:45:30.903117] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 [2024-12-06 23:45:30.953877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.550 BaseBdev1 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 [ 00:11:19.550 { 00:11:19.550 "name": "BaseBdev1", 00:11:19.550 "aliases": [ 00:11:19.550 "2fdd1bba-2f01-4d0c-80fa-45c25dbaa0d2" 00:11:19.550 ], 00:11:19.550 "product_name": "Malloc disk", 00:11:19.550 "block_size": 512, 00:11:19.550 "num_blocks": 65536, 00:11:19.550 "uuid": "2fdd1bba-2f01-4d0c-80fa-45c25dbaa0d2", 00:11:19.550 "assigned_rate_limits": { 00:11:19.550 "rw_ios_per_sec": 0, 00:11:19.550 "rw_mbytes_per_sec": 0, 00:11:19.550 "r_mbytes_per_sec": 0, 00:11:19.550 "w_mbytes_per_sec": 0 00:11:19.550 }, 00:11:19.550 "claimed": true, 00:11:19.550 "claim_type": "exclusive_write", 00:11:19.550 "zoned": false, 00:11:19.550 "supported_io_types": { 00:11:19.550 "read": true, 00:11:19.550 "write": true, 00:11:19.550 "unmap": true, 00:11:19.550 "flush": true, 00:11:19.550 "reset": true, 00:11:19.550 "nvme_admin": false, 00:11:19.550 "nvme_io": false, 00:11:19.550 "nvme_io_md": false, 00:11:19.550 "write_zeroes": true, 00:11:19.550 "zcopy": true, 00:11:19.550 "get_zone_info": false, 00:11:19.550 "zone_management": false, 00:11:19.550 "zone_append": false, 00:11:19.550 "compare": false, 00:11:19.550 "compare_and_write": false, 00:11:19.550 "abort": true, 00:11:19.550 "seek_hole": false, 00:11:19.550 "seek_data": false, 00:11:19.550 "copy": true, 00:11:19.550 "nvme_iov_md": false 00:11:19.550 }, 00:11:19.550 "memory_domains": [ 00:11:19.550 { 00:11:19.550 "dma_device_id": "system", 00:11:19.550 "dma_device_type": 1 00:11:19.550 }, 00:11:19.550 { 00:11:19.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.550 "dma_device_type": 2 00:11:19.550 } 00:11:19.550 ], 00:11:19.550 "driver_specific": {} 00:11:19.550 } 00:11:19.550 ] 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.550 23:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.550 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.550 "name": "Existed_Raid", 
00:11:19.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.550 "strip_size_kb": 0, 00:11:19.550 "state": "configuring", 00:11:19.550 "raid_level": "raid1", 00:11:19.550 "superblock": false, 00:11:19.550 "num_base_bdevs": 4, 00:11:19.550 "num_base_bdevs_discovered": 1, 00:11:19.550 "num_base_bdevs_operational": 4, 00:11:19.550 "base_bdevs_list": [ 00:11:19.550 { 00:11:19.550 "name": "BaseBdev1", 00:11:19.550 "uuid": "2fdd1bba-2f01-4d0c-80fa-45c25dbaa0d2", 00:11:19.550 "is_configured": true, 00:11:19.550 "data_offset": 0, 00:11:19.550 "data_size": 65536 00:11:19.550 }, 00:11:19.550 { 00:11:19.550 "name": "BaseBdev2", 00:11:19.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.551 "is_configured": false, 00:11:19.551 "data_offset": 0, 00:11:19.551 "data_size": 0 00:11:19.551 }, 00:11:19.551 { 00:11:19.551 "name": "BaseBdev3", 00:11:19.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.551 "is_configured": false, 00:11:19.551 "data_offset": 0, 00:11:19.551 "data_size": 0 00:11:19.551 }, 00:11:19.551 { 00:11:19.551 "name": "BaseBdev4", 00:11:19.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.551 "is_configured": false, 00:11:19.551 "data_offset": 0, 00:11:19.551 "data_size": 0 00:11:19.551 } 00:11:19.551 ] 00:11:19.551 }' 00:11:19.551 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.551 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.117 [2024-12-06 23:45:31.433177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.117 [2024-12-06 23:45:31.433260] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.117 [2024-12-06 23:45:31.445184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.117 [2024-12-06 23:45:31.447347] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.117 [2024-12-06 23:45:31.447431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.117 [2024-12-06 23:45:31.447459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:20.117 [2024-12-06 23:45:31.447484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.117 [2024-12-06 23:45:31.447502] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:20.117 [2024-12-06 23:45:31.447523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.117 
23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.117 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.117 "name": "Existed_Raid", 00:11:20.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.117 "strip_size_kb": 0, 00:11:20.118 "state": "configuring", 00:11:20.118 "raid_level": "raid1", 00:11:20.118 "superblock": false, 00:11:20.118 "num_base_bdevs": 4, 00:11:20.118 "num_base_bdevs_discovered": 1, 
00:11:20.118 "num_base_bdevs_operational": 4, 00:11:20.118 "base_bdevs_list": [ 00:11:20.118 { 00:11:20.118 "name": "BaseBdev1", 00:11:20.118 "uuid": "2fdd1bba-2f01-4d0c-80fa-45c25dbaa0d2", 00:11:20.118 "is_configured": true, 00:11:20.118 "data_offset": 0, 00:11:20.118 "data_size": 65536 00:11:20.118 }, 00:11:20.118 { 00:11:20.118 "name": "BaseBdev2", 00:11:20.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.118 "is_configured": false, 00:11:20.118 "data_offset": 0, 00:11:20.118 "data_size": 0 00:11:20.118 }, 00:11:20.118 { 00:11:20.118 "name": "BaseBdev3", 00:11:20.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.118 "is_configured": false, 00:11:20.118 "data_offset": 0, 00:11:20.118 "data_size": 0 00:11:20.118 }, 00:11:20.118 { 00:11:20.118 "name": "BaseBdev4", 00:11:20.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.118 "is_configured": false, 00:11:20.118 "data_offset": 0, 00:11:20.118 "data_size": 0 00:11:20.118 } 00:11:20.118 ] 00:11:20.118 }' 00:11:20.118 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.118 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.391 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:20.391 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.391 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.667 [2024-12-06 23:45:31.962125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.667 BaseBdev2 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.667 [ 00:11:20.667 { 00:11:20.667 "name": "BaseBdev2", 00:11:20.667 "aliases": [ 00:11:20.667 "9c714bb7-2085-47d6-bde3-781b7b587eb9" 00:11:20.667 ], 00:11:20.667 "product_name": "Malloc disk", 00:11:20.667 "block_size": 512, 00:11:20.667 "num_blocks": 65536, 00:11:20.667 "uuid": "9c714bb7-2085-47d6-bde3-781b7b587eb9", 00:11:20.667 "assigned_rate_limits": { 00:11:20.667 "rw_ios_per_sec": 0, 00:11:20.667 "rw_mbytes_per_sec": 0, 00:11:20.667 "r_mbytes_per_sec": 0, 00:11:20.667 "w_mbytes_per_sec": 0 00:11:20.667 }, 00:11:20.667 "claimed": true, 00:11:20.667 "claim_type": "exclusive_write", 00:11:20.667 "zoned": false, 00:11:20.667 "supported_io_types": { 00:11:20.667 "read": true, 
00:11:20.667 "write": true, 00:11:20.667 "unmap": true, 00:11:20.667 "flush": true, 00:11:20.667 "reset": true, 00:11:20.667 "nvme_admin": false, 00:11:20.667 "nvme_io": false, 00:11:20.667 "nvme_io_md": false, 00:11:20.667 "write_zeroes": true, 00:11:20.667 "zcopy": true, 00:11:20.667 "get_zone_info": false, 00:11:20.667 "zone_management": false, 00:11:20.667 "zone_append": false, 00:11:20.667 "compare": false, 00:11:20.667 "compare_and_write": false, 00:11:20.667 "abort": true, 00:11:20.667 "seek_hole": false, 00:11:20.667 "seek_data": false, 00:11:20.667 "copy": true, 00:11:20.667 "nvme_iov_md": false 00:11:20.667 }, 00:11:20.667 "memory_domains": [ 00:11:20.667 { 00:11:20.667 "dma_device_id": "system", 00:11:20.667 "dma_device_type": 1 00:11:20.667 }, 00:11:20.667 { 00:11:20.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.667 "dma_device_type": 2 00:11:20.667 } 00:11:20.667 ], 00:11:20.667 "driver_specific": {} 00:11:20.667 } 00:11:20.667 ] 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.667 23:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.667 "name": "Existed_Raid", 00:11:20.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.667 "strip_size_kb": 0, 00:11:20.667 "state": "configuring", 00:11:20.667 "raid_level": "raid1", 00:11:20.667 "superblock": false, 00:11:20.667 "num_base_bdevs": 4, 00:11:20.667 "num_base_bdevs_discovered": 2, 00:11:20.667 "num_base_bdevs_operational": 4, 00:11:20.667 "base_bdevs_list": [ 00:11:20.667 { 00:11:20.667 "name": "BaseBdev1", 00:11:20.667 "uuid": "2fdd1bba-2f01-4d0c-80fa-45c25dbaa0d2", 00:11:20.667 "is_configured": true, 00:11:20.667 "data_offset": 0, 00:11:20.667 "data_size": 65536 00:11:20.667 }, 00:11:20.667 { 00:11:20.667 "name": "BaseBdev2", 00:11:20.667 "uuid": "9c714bb7-2085-47d6-bde3-781b7b587eb9", 00:11:20.667 "is_configured": true, 
00:11:20.667 "data_offset": 0, 00:11:20.667 "data_size": 65536 00:11:20.667 }, 00:11:20.667 { 00:11:20.667 "name": "BaseBdev3", 00:11:20.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.667 "is_configured": false, 00:11:20.667 "data_offset": 0, 00:11:20.667 "data_size": 0 00:11:20.667 }, 00:11:20.667 { 00:11:20.667 "name": "BaseBdev4", 00:11:20.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.667 "is_configured": false, 00:11:20.667 "data_offset": 0, 00:11:20.667 "data_size": 0 00:11:20.667 } 00:11:20.667 ] 00:11:20.667 }' 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.667 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.926 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.926 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.926 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.185 [2024-12-06 23:45:32.534990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.185 BaseBdev3 00:11:21.185 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.185 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:21.185 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:21.185 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.186 [ 00:11:21.186 { 00:11:21.186 "name": "BaseBdev3", 00:11:21.186 "aliases": [ 00:11:21.186 "1e6bdc66-5dec-4048-a63c-5783e7f4de14" 00:11:21.186 ], 00:11:21.186 "product_name": "Malloc disk", 00:11:21.186 "block_size": 512, 00:11:21.186 "num_blocks": 65536, 00:11:21.186 "uuid": "1e6bdc66-5dec-4048-a63c-5783e7f4de14", 00:11:21.186 "assigned_rate_limits": { 00:11:21.186 "rw_ios_per_sec": 0, 00:11:21.186 "rw_mbytes_per_sec": 0, 00:11:21.186 "r_mbytes_per_sec": 0, 00:11:21.186 "w_mbytes_per_sec": 0 00:11:21.186 }, 00:11:21.186 "claimed": true, 00:11:21.186 "claim_type": "exclusive_write", 00:11:21.186 "zoned": false, 00:11:21.186 "supported_io_types": { 00:11:21.186 "read": true, 00:11:21.186 "write": true, 00:11:21.186 "unmap": true, 00:11:21.186 "flush": true, 00:11:21.186 "reset": true, 00:11:21.186 "nvme_admin": false, 00:11:21.186 "nvme_io": false, 00:11:21.186 "nvme_io_md": false, 00:11:21.186 "write_zeroes": true, 00:11:21.186 "zcopy": true, 00:11:21.186 "get_zone_info": false, 00:11:21.186 "zone_management": false, 00:11:21.186 "zone_append": false, 00:11:21.186 "compare": false, 00:11:21.186 "compare_and_write": false, 
00:11:21.186 "abort": true, 00:11:21.186 "seek_hole": false, 00:11:21.186 "seek_data": false, 00:11:21.186 "copy": true, 00:11:21.186 "nvme_iov_md": false 00:11:21.186 }, 00:11:21.186 "memory_domains": [ 00:11:21.186 { 00:11:21.186 "dma_device_id": "system", 00:11:21.186 "dma_device_type": 1 00:11:21.186 }, 00:11:21.186 { 00:11:21.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.186 "dma_device_type": 2 00:11:21.186 } 00:11:21.186 ], 00:11:21.186 "driver_specific": {} 00:11:21.186 } 00:11:21.186 ] 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.186 "name": "Existed_Raid", 00:11:21.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.186 "strip_size_kb": 0, 00:11:21.186 "state": "configuring", 00:11:21.186 "raid_level": "raid1", 00:11:21.186 "superblock": false, 00:11:21.186 "num_base_bdevs": 4, 00:11:21.186 "num_base_bdevs_discovered": 3, 00:11:21.186 "num_base_bdevs_operational": 4, 00:11:21.186 "base_bdevs_list": [ 00:11:21.186 { 00:11:21.186 "name": "BaseBdev1", 00:11:21.186 "uuid": "2fdd1bba-2f01-4d0c-80fa-45c25dbaa0d2", 00:11:21.186 "is_configured": true, 00:11:21.186 "data_offset": 0, 00:11:21.186 "data_size": 65536 00:11:21.186 }, 00:11:21.186 { 00:11:21.186 "name": "BaseBdev2", 00:11:21.186 "uuid": "9c714bb7-2085-47d6-bde3-781b7b587eb9", 00:11:21.186 "is_configured": true, 00:11:21.186 "data_offset": 0, 00:11:21.186 "data_size": 65536 00:11:21.186 }, 00:11:21.186 { 00:11:21.186 "name": "BaseBdev3", 00:11:21.186 "uuid": "1e6bdc66-5dec-4048-a63c-5783e7f4de14", 00:11:21.186 "is_configured": true, 00:11:21.186 "data_offset": 0, 00:11:21.186 "data_size": 65536 00:11:21.186 }, 00:11:21.186 { 00:11:21.186 "name": "BaseBdev4", 00:11:21.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.186 "is_configured": false, 
00:11:21.186 "data_offset": 0, 00:11:21.186 "data_size": 0 00:11:21.186 } 00:11:21.186 ] 00:11:21.186 }' 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.186 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.446 23:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:21.446 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.446 23:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.706 [2024-12-06 23:45:33.023121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.706 [2024-12-06 23:45:33.023190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.706 [2024-12-06 23:45:33.023200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:21.706 [2024-12-06 23:45:33.023508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:21.706 [2024-12-06 23:45:33.023738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.706 [2024-12-06 23:45:33.023757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:21.706 [2024-12-06 23:45:33.024044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.706 BaseBdev4 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.706 [ 00:11:21.706 { 00:11:21.706 "name": "BaseBdev4", 00:11:21.706 "aliases": [ 00:11:21.706 "9f6f8b6f-2ce0-4b6f-ae5e-cae94955680f" 00:11:21.706 ], 00:11:21.706 "product_name": "Malloc disk", 00:11:21.706 "block_size": 512, 00:11:21.706 "num_blocks": 65536, 00:11:21.706 "uuid": "9f6f8b6f-2ce0-4b6f-ae5e-cae94955680f", 00:11:21.706 "assigned_rate_limits": { 00:11:21.706 "rw_ios_per_sec": 0, 00:11:21.706 "rw_mbytes_per_sec": 0, 00:11:21.706 "r_mbytes_per_sec": 0, 00:11:21.706 "w_mbytes_per_sec": 0 00:11:21.706 }, 00:11:21.706 "claimed": true, 00:11:21.706 "claim_type": "exclusive_write", 00:11:21.706 "zoned": false, 00:11:21.706 "supported_io_types": { 00:11:21.706 "read": true, 00:11:21.706 "write": true, 00:11:21.706 "unmap": true, 00:11:21.706 "flush": true, 00:11:21.706 "reset": true, 00:11:21.706 
"nvme_admin": false, 00:11:21.706 "nvme_io": false, 00:11:21.706 "nvme_io_md": false, 00:11:21.706 "write_zeroes": true, 00:11:21.706 "zcopy": true, 00:11:21.706 "get_zone_info": false, 00:11:21.706 "zone_management": false, 00:11:21.706 "zone_append": false, 00:11:21.706 "compare": false, 00:11:21.706 "compare_and_write": false, 00:11:21.706 "abort": true, 00:11:21.706 "seek_hole": false, 00:11:21.706 "seek_data": false, 00:11:21.706 "copy": true, 00:11:21.706 "nvme_iov_md": false 00:11:21.706 }, 00:11:21.706 "memory_domains": [ 00:11:21.706 { 00:11:21.706 "dma_device_id": "system", 00:11:21.706 "dma_device_type": 1 00:11:21.706 }, 00:11:21.706 { 00:11:21.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.706 "dma_device_type": 2 00:11:21.706 } 00:11:21.706 ], 00:11:21.706 "driver_specific": {} 00:11:21.706 } 00:11:21.706 ] 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.706 23:45:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.706 "name": "Existed_Raid", 00:11:21.706 "uuid": "a90ed917-8d5d-4526-9c38-4489cc87224d", 00:11:21.706 "strip_size_kb": 0, 00:11:21.706 "state": "online", 00:11:21.706 "raid_level": "raid1", 00:11:21.706 "superblock": false, 00:11:21.706 "num_base_bdevs": 4, 00:11:21.706 "num_base_bdevs_discovered": 4, 00:11:21.706 "num_base_bdevs_operational": 4, 00:11:21.706 "base_bdevs_list": [ 00:11:21.706 { 00:11:21.706 "name": "BaseBdev1", 00:11:21.706 "uuid": "2fdd1bba-2f01-4d0c-80fa-45c25dbaa0d2", 00:11:21.706 "is_configured": true, 00:11:21.706 "data_offset": 0, 00:11:21.706 "data_size": 65536 00:11:21.706 }, 00:11:21.706 { 00:11:21.706 "name": "BaseBdev2", 00:11:21.706 "uuid": "9c714bb7-2085-47d6-bde3-781b7b587eb9", 00:11:21.706 "is_configured": true, 00:11:21.706 "data_offset": 0, 00:11:21.706 "data_size": 65536 00:11:21.706 }, 00:11:21.706 { 00:11:21.706 "name": "BaseBdev3", 00:11:21.706 "uuid": 
"1e6bdc66-5dec-4048-a63c-5783e7f4de14", 00:11:21.706 "is_configured": true, 00:11:21.706 "data_offset": 0, 00:11:21.706 "data_size": 65536 00:11:21.706 }, 00:11:21.706 { 00:11:21.706 "name": "BaseBdev4", 00:11:21.706 "uuid": "9f6f8b6f-2ce0-4b6f-ae5e-cae94955680f", 00:11:21.706 "is_configured": true, 00:11:21.706 "data_offset": 0, 00:11:21.706 "data_size": 65536 00:11:21.706 } 00:11:21.706 ] 00:11:21.706 }' 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.706 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.966 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.966 [2024-12-06 23:45:33.522818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.228 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.228 23:45:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.228 "name": "Existed_Raid", 00:11:22.228 "aliases": [ 00:11:22.228 "a90ed917-8d5d-4526-9c38-4489cc87224d" 00:11:22.228 ], 00:11:22.228 "product_name": "Raid Volume", 00:11:22.228 "block_size": 512, 00:11:22.228 "num_blocks": 65536, 00:11:22.228 "uuid": "a90ed917-8d5d-4526-9c38-4489cc87224d", 00:11:22.228 "assigned_rate_limits": { 00:11:22.228 "rw_ios_per_sec": 0, 00:11:22.228 "rw_mbytes_per_sec": 0, 00:11:22.228 "r_mbytes_per_sec": 0, 00:11:22.228 "w_mbytes_per_sec": 0 00:11:22.228 }, 00:11:22.228 "claimed": false, 00:11:22.228 "zoned": false, 00:11:22.228 "supported_io_types": { 00:11:22.228 "read": true, 00:11:22.228 "write": true, 00:11:22.228 "unmap": false, 00:11:22.228 "flush": false, 00:11:22.228 "reset": true, 00:11:22.228 "nvme_admin": false, 00:11:22.228 "nvme_io": false, 00:11:22.228 "nvme_io_md": false, 00:11:22.228 "write_zeroes": true, 00:11:22.228 "zcopy": false, 00:11:22.228 "get_zone_info": false, 00:11:22.228 "zone_management": false, 00:11:22.228 "zone_append": false, 00:11:22.228 "compare": false, 00:11:22.228 "compare_and_write": false, 00:11:22.228 "abort": false, 00:11:22.228 "seek_hole": false, 00:11:22.228 "seek_data": false, 00:11:22.228 "copy": false, 00:11:22.228 "nvme_iov_md": false 00:11:22.228 }, 00:11:22.228 "memory_domains": [ 00:11:22.228 { 00:11:22.228 "dma_device_id": "system", 00:11:22.228 "dma_device_type": 1 00:11:22.228 }, 00:11:22.228 { 00:11:22.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.228 "dma_device_type": 2 00:11:22.228 }, 00:11:22.228 { 00:11:22.229 "dma_device_id": "system", 00:11:22.229 "dma_device_type": 1 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.229 "dma_device_type": 2 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "dma_device_id": "system", 00:11:22.229 "dma_device_type": 1 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:22.229 "dma_device_type": 2 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "dma_device_id": "system", 00:11:22.229 "dma_device_type": 1 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.229 "dma_device_type": 2 00:11:22.229 } 00:11:22.229 ], 00:11:22.229 "driver_specific": { 00:11:22.229 "raid": { 00:11:22.229 "uuid": "a90ed917-8d5d-4526-9c38-4489cc87224d", 00:11:22.229 "strip_size_kb": 0, 00:11:22.229 "state": "online", 00:11:22.229 "raid_level": "raid1", 00:11:22.229 "superblock": false, 00:11:22.229 "num_base_bdevs": 4, 00:11:22.229 "num_base_bdevs_discovered": 4, 00:11:22.229 "num_base_bdevs_operational": 4, 00:11:22.229 "base_bdevs_list": [ 00:11:22.229 { 00:11:22.229 "name": "BaseBdev1", 00:11:22.229 "uuid": "2fdd1bba-2f01-4d0c-80fa-45c25dbaa0d2", 00:11:22.229 "is_configured": true, 00:11:22.229 "data_offset": 0, 00:11:22.229 "data_size": 65536 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "name": "BaseBdev2", 00:11:22.229 "uuid": "9c714bb7-2085-47d6-bde3-781b7b587eb9", 00:11:22.229 "is_configured": true, 00:11:22.229 "data_offset": 0, 00:11:22.229 "data_size": 65536 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "name": "BaseBdev3", 00:11:22.229 "uuid": "1e6bdc66-5dec-4048-a63c-5783e7f4de14", 00:11:22.229 "is_configured": true, 00:11:22.229 "data_offset": 0, 00:11:22.229 "data_size": 65536 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "name": "BaseBdev4", 00:11:22.229 "uuid": "9f6f8b6f-2ce0-4b6f-ae5e-cae94955680f", 00:11:22.229 "is_configured": true, 00:11:22.229 "data_offset": 0, 00:11:22.229 "data_size": 65536 00:11:22.229 } 00:11:22.229 ] 00:11:22.229 } 00:11:22.229 } 00:11:22.229 }' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:22.229 BaseBdev2 00:11:22.229 BaseBdev3 
00:11:22.229 BaseBdev4' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.229 23:45:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.229 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.488 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.489 23:45:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.489 [2024-12-06 23:45:33.853886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.489 
23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.489 23:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.489 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.489 "name": "Existed_Raid", 00:11:22.489 "uuid": "a90ed917-8d5d-4526-9c38-4489cc87224d", 00:11:22.489 "strip_size_kb": 0, 00:11:22.489 "state": "online", 00:11:22.489 "raid_level": "raid1", 00:11:22.489 "superblock": false, 00:11:22.489 "num_base_bdevs": 4, 00:11:22.489 "num_base_bdevs_discovered": 3, 00:11:22.489 "num_base_bdevs_operational": 3, 00:11:22.489 "base_bdevs_list": [ 00:11:22.489 { 00:11:22.489 "name": null, 00:11:22.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.489 "is_configured": false, 00:11:22.489 "data_offset": 0, 00:11:22.489 "data_size": 65536 00:11:22.489 }, 00:11:22.489 { 00:11:22.489 "name": "BaseBdev2", 00:11:22.489 "uuid": "9c714bb7-2085-47d6-bde3-781b7b587eb9", 00:11:22.489 "is_configured": true, 00:11:22.489 "data_offset": 0, 00:11:22.489 "data_size": 65536 00:11:22.489 }, 00:11:22.489 { 00:11:22.489 "name": "BaseBdev3", 00:11:22.489 "uuid": "1e6bdc66-5dec-4048-a63c-5783e7f4de14", 00:11:22.489 "is_configured": true, 00:11:22.489 "data_offset": 0, 
00:11:22.489 "data_size": 65536 00:11:22.489 }, 00:11:22.489 { 00:11:22.489 "name": "BaseBdev4", 00:11:22.489 "uuid": "9f6f8b6f-2ce0-4b6f-ae5e-cae94955680f", 00:11:22.489 "is_configured": true, 00:11:22.489 "data_offset": 0, 00:11:22.489 "data_size": 65536 00:11:22.489 } 00:11:22.489 ] 00:11:22.489 }' 00:11:22.489 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.489 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.056 [2024-12-06 23:45:34.496771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.056 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.057 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.057 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.057 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.057 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.315 [2024-12-06 23:45:34.661083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.315 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.315 [2024-12-06 23:45:34.825021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:23.315 [2024-12-06 23:45:34.825146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.574 [2024-12-06 23:45:34.929428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.574 [2024-12-06 23:45:34.929488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.574 [2024-12-06 23:45:34.929503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.574 23:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.574 BaseBdev2 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.574 [ 00:11:23.574 { 00:11:23.574 "name": "BaseBdev2", 00:11:23.574 "aliases": [ 00:11:23.574 "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1" 00:11:23.574 ], 00:11:23.574 "product_name": "Malloc disk", 00:11:23.574 "block_size": 512, 00:11:23.574 "num_blocks": 65536, 00:11:23.574 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:23.574 "assigned_rate_limits": { 00:11:23.574 "rw_ios_per_sec": 0, 00:11:23.574 "rw_mbytes_per_sec": 0, 00:11:23.574 "r_mbytes_per_sec": 0, 00:11:23.574 "w_mbytes_per_sec": 0 00:11:23.574 }, 00:11:23.574 "claimed": false, 00:11:23.574 "zoned": false, 00:11:23.574 "supported_io_types": { 00:11:23.574 "read": true, 00:11:23.574 "write": true, 00:11:23.574 "unmap": true, 00:11:23.574 "flush": true, 00:11:23.574 "reset": true, 00:11:23.574 "nvme_admin": false, 00:11:23.574 "nvme_io": false, 00:11:23.574 "nvme_io_md": false, 00:11:23.574 "write_zeroes": true, 00:11:23.574 "zcopy": true, 00:11:23.574 "get_zone_info": false, 00:11:23.574 "zone_management": false, 00:11:23.574 "zone_append": false, 
00:11:23.574 "compare": false, 00:11:23.574 "compare_and_write": false, 00:11:23.574 "abort": true, 00:11:23.574 "seek_hole": false, 00:11:23.574 "seek_data": false, 00:11:23.574 "copy": true, 00:11:23.574 "nvme_iov_md": false 00:11:23.574 }, 00:11:23.574 "memory_domains": [ 00:11:23.574 { 00:11:23.574 "dma_device_id": "system", 00:11:23.574 "dma_device_type": 1 00:11:23.574 }, 00:11:23.574 { 00:11:23.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.574 "dma_device_type": 2 00:11:23.574 } 00:11:23.574 ], 00:11:23.574 "driver_specific": {} 00:11:23.574 } 00:11:23.574 ] 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.574 BaseBdev3 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.574 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.834 [ 00:11:23.834 { 00:11:23.834 "name": "BaseBdev3", 00:11:23.834 "aliases": [ 00:11:23.834 "c4f60c0a-775b-442c-99c3-89799e4cd360" 00:11:23.834 ], 00:11:23.834 "product_name": "Malloc disk", 00:11:23.834 "block_size": 512, 00:11:23.834 "num_blocks": 65536, 00:11:23.834 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:23.834 "assigned_rate_limits": { 00:11:23.834 "rw_ios_per_sec": 0, 00:11:23.834 "rw_mbytes_per_sec": 0, 00:11:23.834 "r_mbytes_per_sec": 0, 00:11:23.834 "w_mbytes_per_sec": 0 00:11:23.834 }, 00:11:23.834 "claimed": false, 00:11:23.834 "zoned": false, 00:11:23.834 "supported_io_types": { 00:11:23.834 "read": true, 00:11:23.834 "write": true, 00:11:23.834 "unmap": true, 00:11:23.834 "flush": true, 00:11:23.834 "reset": true, 00:11:23.834 "nvme_admin": false, 00:11:23.834 "nvme_io": false, 00:11:23.834 "nvme_io_md": false, 00:11:23.834 "write_zeroes": true, 00:11:23.834 "zcopy": true, 00:11:23.834 "get_zone_info": false, 00:11:23.834 "zone_management": false, 00:11:23.834 "zone_append": false, 
00:11:23.834 "compare": false, 00:11:23.834 "compare_and_write": false, 00:11:23.834 "abort": true, 00:11:23.834 "seek_hole": false, 00:11:23.834 "seek_data": false, 00:11:23.834 "copy": true, 00:11:23.834 "nvme_iov_md": false 00:11:23.834 }, 00:11:23.834 "memory_domains": [ 00:11:23.834 { 00:11:23.834 "dma_device_id": "system", 00:11:23.834 "dma_device_type": 1 00:11:23.834 }, 00:11:23.834 { 00:11:23.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.834 "dma_device_type": 2 00:11:23.834 } 00:11:23.834 ], 00:11:23.834 "driver_specific": {} 00:11:23.834 } 00:11:23.834 ] 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.834 BaseBdev4 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.834 [ 00:11:23.834 { 00:11:23.834 "name": "BaseBdev4", 00:11:23.834 "aliases": [ 00:11:23.834 "f0f07df0-14a4-4456-9b05-2722c8409063" 00:11:23.834 ], 00:11:23.834 "product_name": "Malloc disk", 00:11:23.834 "block_size": 512, 00:11:23.834 "num_blocks": 65536, 00:11:23.834 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:23.834 "assigned_rate_limits": { 00:11:23.834 "rw_ios_per_sec": 0, 00:11:23.834 "rw_mbytes_per_sec": 0, 00:11:23.834 "r_mbytes_per_sec": 0, 00:11:23.834 "w_mbytes_per_sec": 0 00:11:23.834 }, 00:11:23.834 "claimed": false, 00:11:23.834 "zoned": false, 00:11:23.834 "supported_io_types": { 00:11:23.834 "read": true, 00:11:23.834 "write": true, 00:11:23.834 "unmap": true, 00:11:23.834 "flush": true, 00:11:23.834 "reset": true, 00:11:23.834 "nvme_admin": false, 00:11:23.834 "nvme_io": false, 00:11:23.834 "nvme_io_md": false, 00:11:23.834 "write_zeroes": true, 00:11:23.834 "zcopy": true, 00:11:23.834 "get_zone_info": false, 00:11:23.834 "zone_management": false, 00:11:23.834 "zone_append": false, 
00:11:23.834 "compare": false, 00:11:23.834 "compare_and_write": false, 00:11:23.834 "abort": true, 00:11:23.834 "seek_hole": false, 00:11:23.834 "seek_data": false, 00:11:23.834 "copy": true, 00:11:23.834 "nvme_iov_md": false 00:11:23.834 }, 00:11:23.834 "memory_domains": [ 00:11:23.834 { 00:11:23.834 "dma_device_id": "system", 00:11:23.834 "dma_device_type": 1 00:11:23.834 }, 00:11:23.834 { 00:11:23.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.834 "dma_device_type": 2 00:11:23.834 } 00:11:23.834 ], 00:11:23.834 "driver_specific": {} 00:11:23.834 } 00:11:23.834 ] 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.834 [2024-12-06 23:45:35.243210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.834 [2024-12-06 23:45:35.243342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.834 [2024-12-06 23:45:35.243387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.834 [2024-12-06 23:45:35.245639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.834 [2024-12-06 23:45:35.245760] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.834 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.835 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.835 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.835 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:23.835 "name": "Existed_Raid", 00:11:23.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.835 "strip_size_kb": 0, 00:11:23.835 "state": "configuring", 00:11:23.835 "raid_level": "raid1", 00:11:23.835 "superblock": false, 00:11:23.835 "num_base_bdevs": 4, 00:11:23.835 "num_base_bdevs_discovered": 3, 00:11:23.835 "num_base_bdevs_operational": 4, 00:11:23.835 "base_bdevs_list": [ 00:11:23.835 { 00:11:23.835 "name": "BaseBdev1", 00:11:23.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.835 "is_configured": false, 00:11:23.835 "data_offset": 0, 00:11:23.835 "data_size": 0 00:11:23.835 }, 00:11:23.835 { 00:11:23.835 "name": "BaseBdev2", 00:11:23.835 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:23.835 "is_configured": true, 00:11:23.835 "data_offset": 0, 00:11:23.835 "data_size": 65536 00:11:23.835 }, 00:11:23.835 { 00:11:23.835 "name": "BaseBdev3", 00:11:23.835 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:23.835 "is_configured": true, 00:11:23.835 "data_offset": 0, 00:11:23.835 "data_size": 65536 00:11:23.835 }, 00:11:23.835 { 00:11:23.835 "name": "BaseBdev4", 00:11:23.835 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:23.835 "is_configured": true, 00:11:23.835 "data_offset": 0, 00:11:23.835 "data_size": 65536 00:11:23.835 } 00:11:23.835 ] 00:11:23.835 }' 00:11:23.835 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.835 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.399 [2024-12-06 23:45:35.718564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.399 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.399 "name": "Existed_Raid", 00:11:24.399 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:24.399 "strip_size_kb": 0, 00:11:24.399 "state": "configuring", 00:11:24.399 "raid_level": "raid1", 00:11:24.399 "superblock": false, 00:11:24.399 "num_base_bdevs": 4, 00:11:24.399 "num_base_bdevs_discovered": 2, 00:11:24.399 "num_base_bdevs_operational": 4, 00:11:24.399 "base_bdevs_list": [ 00:11:24.399 { 00:11:24.400 "name": "BaseBdev1", 00:11:24.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.400 "is_configured": false, 00:11:24.400 "data_offset": 0, 00:11:24.400 "data_size": 0 00:11:24.400 }, 00:11:24.400 { 00:11:24.400 "name": null, 00:11:24.400 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:24.400 "is_configured": false, 00:11:24.400 "data_offset": 0, 00:11:24.400 "data_size": 65536 00:11:24.400 }, 00:11:24.400 { 00:11:24.400 "name": "BaseBdev3", 00:11:24.400 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:24.400 "is_configured": true, 00:11:24.400 "data_offset": 0, 00:11:24.400 "data_size": 65536 00:11:24.400 }, 00:11:24.400 { 00:11:24.400 "name": "BaseBdev4", 00:11:24.400 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:24.400 "is_configured": true, 00:11:24.400 "data_offset": 0, 00:11:24.400 "data_size": 65536 00:11:24.400 } 00:11:24.400 ] 00:11:24.400 }' 00:11:24.400 23:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.400 23:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.658 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.658 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:24.658 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.658 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.658 23:45:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.917 [2024-12-06 23:45:36.272469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.917 BaseBdev1 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.917 [ 00:11:24.917 { 00:11:24.917 "name": "BaseBdev1", 00:11:24.917 "aliases": [ 00:11:24.917 "784e4439-0041-4817-8378-80746dcfca8c" 00:11:24.917 ], 00:11:24.917 "product_name": "Malloc disk", 00:11:24.917 "block_size": 512, 00:11:24.917 "num_blocks": 65536, 00:11:24.917 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:24.917 "assigned_rate_limits": { 00:11:24.917 "rw_ios_per_sec": 0, 00:11:24.917 "rw_mbytes_per_sec": 0, 00:11:24.917 "r_mbytes_per_sec": 0, 00:11:24.917 "w_mbytes_per_sec": 0 00:11:24.917 }, 00:11:24.917 "claimed": true, 00:11:24.917 "claim_type": "exclusive_write", 00:11:24.917 "zoned": false, 00:11:24.917 "supported_io_types": { 00:11:24.917 "read": true, 00:11:24.917 "write": true, 00:11:24.917 "unmap": true, 00:11:24.917 "flush": true, 00:11:24.917 "reset": true, 00:11:24.917 "nvme_admin": false, 00:11:24.917 "nvme_io": false, 00:11:24.917 "nvme_io_md": false, 00:11:24.917 "write_zeroes": true, 00:11:24.917 "zcopy": true, 00:11:24.917 "get_zone_info": false, 00:11:24.917 "zone_management": false, 00:11:24.917 "zone_append": false, 00:11:24.917 "compare": false, 00:11:24.917 "compare_and_write": false, 00:11:24.917 "abort": true, 00:11:24.917 "seek_hole": false, 00:11:24.917 "seek_data": false, 00:11:24.917 "copy": true, 00:11:24.917 "nvme_iov_md": false 00:11:24.917 }, 00:11:24.917 "memory_domains": [ 00:11:24.917 { 00:11:24.917 "dma_device_id": "system", 00:11:24.917 "dma_device_type": 1 00:11:24.917 }, 00:11:24.917 { 00:11:24.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.917 "dma_device_type": 2 00:11:24.917 } 00:11:24.917 ], 00:11:24.917 "driver_specific": {} 00:11:24.917 } 00:11:24.917 ] 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.917 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.918 "name": "Existed_Raid", 00:11:24.918 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:24.918 "strip_size_kb": 0, 00:11:24.918 "state": "configuring", 00:11:24.918 "raid_level": "raid1", 00:11:24.918 "superblock": false, 00:11:24.918 "num_base_bdevs": 4, 00:11:24.918 "num_base_bdevs_discovered": 3, 00:11:24.918 "num_base_bdevs_operational": 4, 00:11:24.918 "base_bdevs_list": [ 00:11:24.918 { 00:11:24.918 "name": "BaseBdev1", 00:11:24.918 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:24.918 "is_configured": true, 00:11:24.918 "data_offset": 0, 00:11:24.918 "data_size": 65536 00:11:24.918 }, 00:11:24.918 { 00:11:24.918 "name": null, 00:11:24.918 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:24.918 "is_configured": false, 00:11:24.918 "data_offset": 0, 00:11:24.918 "data_size": 65536 00:11:24.918 }, 00:11:24.918 { 00:11:24.918 "name": "BaseBdev3", 00:11:24.918 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:24.918 "is_configured": true, 00:11:24.918 "data_offset": 0, 00:11:24.918 "data_size": 65536 00:11:24.918 }, 00:11:24.918 { 00:11:24.918 "name": "BaseBdev4", 00:11:24.918 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:24.918 "is_configured": true, 00:11:24.918 "data_offset": 0, 00:11:24.918 "data_size": 65536 00:11:24.918 } 00:11:24.918 ] 00:11:24.918 }' 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.918 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.177 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:25.177 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.177 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.177 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.436 [2024-12-06 23:45:36.775812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.436 "name": "Existed_Raid", 00:11:25.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.436 "strip_size_kb": 0, 00:11:25.436 "state": "configuring", 00:11:25.436 "raid_level": "raid1", 00:11:25.436 "superblock": false, 00:11:25.436 "num_base_bdevs": 4, 00:11:25.436 "num_base_bdevs_discovered": 2, 00:11:25.436 "num_base_bdevs_operational": 4, 00:11:25.436 "base_bdevs_list": [ 00:11:25.436 { 00:11:25.436 "name": "BaseBdev1", 00:11:25.436 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:25.436 "is_configured": true, 00:11:25.436 "data_offset": 0, 00:11:25.436 "data_size": 65536 00:11:25.436 }, 00:11:25.436 { 00:11:25.436 "name": null, 00:11:25.436 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:25.436 "is_configured": false, 00:11:25.436 "data_offset": 0, 00:11:25.436 "data_size": 65536 00:11:25.436 }, 00:11:25.436 { 00:11:25.436 "name": null, 00:11:25.436 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:25.436 "is_configured": false, 00:11:25.436 "data_offset": 0, 00:11:25.436 "data_size": 65536 00:11:25.436 }, 00:11:25.436 { 00:11:25.436 "name": "BaseBdev4", 00:11:25.436 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:25.436 "is_configured": true, 00:11:25.436 "data_offset": 0, 00:11:25.436 "data_size": 65536 00:11:25.436 } 00:11:25.436 ] 00:11:25.436 }' 00:11:25.436 23:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.436 23:45:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.695 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.696 [2024-12-06 23:45:37.250972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.696 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.956 23:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.956 "name": "Existed_Raid", 00:11:25.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.956 "strip_size_kb": 0, 00:11:25.956 "state": "configuring", 00:11:25.956 "raid_level": "raid1", 00:11:25.956 "superblock": false, 00:11:25.956 "num_base_bdevs": 4, 00:11:25.956 "num_base_bdevs_discovered": 3, 00:11:25.956 "num_base_bdevs_operational": 4, 00:11:25.956 "base_bdevs_list": [ 00:11:25.956 { 00:11:25.956 "name": "BaseBdev1", 00:11:25.956 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:25.956 "is_configured": true, 00:11:25.956 "data_offset": 0, 00:11:25.956 "data_size": 65536 00:11:25.956 }, 00:11:25.956 { 00:11:25.956 "name": null, 00:11:25.956 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:25.956 "is_configured": false, 00:11:25.956 "data_offset": 
0, 00:11:25.956 "data_size": 65536 00:11:25.956 }, 00:11:25.956 { 00:11:25.956 "name": "BaseBdev3", 00:11:25.956 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:25.956 "is_configured": true, 00:11:25.956 "data_offset": 0, 00:11:25.956 "data_size": 65536 00:11:25.956 }, 00:11:25.956 { 00:11:25.956 "name": "BaseBdev4", 00:11:25.956 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:25.956 "is_configured": true, 00:11:25.956 "data_offset": 0, 00:11:25.956 "data_size": 65536 00:11:25.956 } 00:11:25.956 ] 00:11:25.956 }' 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.956 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.216 [2024-12-06 23:45:37.666363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.216 23:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.216 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.475 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.475 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.475 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.475 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.475 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.475 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.475 "name": "Existed_Raid", 00:11:26.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.475 "strip_size_kb": 0, 00:11:26.475 "state": "configuring", 00:11:26.475 
"raid_level": "raid1", 00:11:26.475 "superblock": false, 00:11:26.475 "num_base_bdevs": 4, 00:11:26.475 "num_base_bdevs_discovered": 2, 00:11:26.475 "num_base_bdevs_operational": 4, 00:11:26.475 "base_bdevs_list": [ 00:11:26.475 { 00:11:26.475 "name": null, 00:11:26.475 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:26.475 "is_configured": false, 00:11:26.475 "data_offset": 0, 00:11:26.475 "data_size": 65536 00:11:26.475 }, 00:11:26.475 { 00:11:26.475 "name": null, 00:11:26.475 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:26.475 "is_configured": false, 00:11:26.475 "data_offset": 0, 00:11:26.475 "data_size": 65536 00:11:26.475 }, 00:11:26.476 { 00:11:26.476 "name": "BaseBdev3", 00:11:26.476 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:26.476 "is_configured": true, 00:11:26.476 "data_offset": 0, 00:11:26.476 "data_size": 65536 00:11:26.476 }, 00:11:26.476 { 00:11:26.476 "name": "BaseBdev4", 00:11:26.476 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:26.476 "is_configured": true, 00:11:26.476 "data_offset": 0, 00:11:26.476 "data_size": 65536 00:11:26.476 } 00:11:26.476 ] 00:11:26.476 }' 00:11:26.476 23:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.476 23:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.734 [2024-12-06 23:45:38.286452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.734 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.735 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.735 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.735 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.994 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.994 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:26.994 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.994 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.994 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.994 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.994 "name": "Existed_Raid", 00:11:26.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.994 "strip_size_kb": 0, 00:11:26.994 "state": "configuring", 00:11:26.994 "raid_level": "raid1", 00:11:26.994 "superblock": false, 00:11:26.994 "num_base_bdevs": 4, 00:11:26.994 "num_base_bdevs_discovered": 3, 00:11:26.994 "num_base_bdevs_operational": 4, 00:11:26.994 "base_bdevs_list": [ 00:11:26.994 { 00:11:26.994 "name": null, 00:11:26.994 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:26.994 "is_configured": false, 00:11:26.994 "data_offset": 0, 00:11:26.994 "data_size": 65536 00:11:26.994 }, 00:11:26.994 { 00:11:26.994 "name": "BaseBdev2", 00:11:26.994 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:26.994 "is_configured": true, 00:11:26.994 "data_offset": 0, 00:11:26.994 "data_size": 65536 00:11:26.994 }, 00:11:26.994 { 00:11:26.994 "name": "BaseBdev3", 00:11:26.994 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:26.994 "is_configured": true, 00:11:26.994 "data_offset": 0, 00:11:26.994 "data_size": 65536 00:11:26.994 }, 00:11:26.994 { 00:11:26.994 "name": "BaseBdev4", 00:11:26.994 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:26.994 "is_configured": true, 00:11:26.994 "data_offset": 0, 00:11:26.994 "data_size": 65536 00:11:26.994 } 00:11:26.994 ] 00:11:26.994 }' 00:11:26.994 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.994 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.254 23:45:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.254 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.254 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.254 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.254 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.254 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 784e4439-0041-4817-8378-80746dcfca8c 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.513 [2024-12-06 23:45:38.904666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:27.513 [2024-12-06 23:45:38.904738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:27.513 [2024-12-06 23:45:38.904749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:27.513 
[2024-12-06 23:45:38.905041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:27.513 [2024-12-06 23:45:38.905226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:27.513 [2024-12-06 23:45:38.905236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:27.513 [2024-12-06 23:45:38.905549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.513 NewBaseBdev 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:27.513 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.513 [ 00:11:27.513 { 00:11:27.514 "name": "NewBaseBdev", 00:11:27.514 "aliases": [ 00:11:27.514 "784e4439-0041-4817-8378-80746dcfca8c" 00:11:27.514 ], 00:11:27.514 "product_name": "Malloc disk", 00:11:27.514 "block_size": 512, 00:11:27.514 "num_blocks": 65536, 00:11:27.514 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:27.514 "assigned_rate_limits": { 00:11:27.514 "rw_ios_per_sec": 0, 00:11:27.514 "rw_mbytes_per_sec": 0, 00:11:27.514 "r_mbytes_per_sec": 0, 00:11:27.514 "w_mbytes_per_sec": 0 00:11:27.514 }, 00:11:27.514 "claimed": true, 00:11:27.514 "claim_type": "exclusive_write", 00:11:27.514 "zoned": false, 00:11:27.514 "supported_io_types": { 00:11:27.514 "read": true, 00:11:27.514 "write": true, 00:11:27.514 "unmap": true, 00:11:27.514 "flush": true, 00:11:27.514 "reset": true, 00:11:27.514 "nvme_admin": false, 00:11:27.514 "nvme_io": false, 00:11:27.514 "nvme_io_md": false, 00:11:27.514 "write_zeroes": true, 00:11:27.514 "zcopy": true, 00:11:27.514 "get_zone_info": false, 00:11:27.514 "zone_management": false, 00:11:27.514 "zone_append": false, 00:11:27.514 "compare": false, 00:11:27.514 "compare_and_write": false, 00:11:27.514 "abort": true, 00:11:27.514 "seek_hole": false, 00:11:27.514 "seek_data": false, 00:11:27.514 "copy": true, 00:11:27.514 "nvme_iov_md": false 00:11:27.514 }, 00:11:27.514 "memory_domains": [ 00:11:27.514 { 00:11:27.514 "dma_device_id": "system", 00:11:27.514 "dma_device_type": 1 00:11:27.514 }, 00:11:27.514 { 00:11:27.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.514 "dma_device_type": 2 00:11:27.514 } 00:11:27.514 ], 00:11:27.514 "driver_specific": {} 00:11:27.514 } 00:11:27.514 ] 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.514 "name": "Existed_Raid", 00:11:27.514 "uuid": "d9fdf952-3d3b-490d-ba99-82b67e4318af", 00:11:27.514 "strip_size_kb": 0, 00:11:27.514 "state": "online", 00:11:27.514 
"raid_level": "raid1", 00:11:27.514 "superblock": false, 00:11:27.514 "num_base_bdevs": 4, 00:11:27.514 "num_base_bdevs_discovered": 4, 00:11:27.514 "num_base_bdevs_operational": 4, 00:11:27.514 "base_bdevs_list": [ 00:11:27.514 { 00:11:27.514 "name": "NewBaseBdev", 00:11:27.514 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:27.514 "is_configured": true, 00:11:27.514 "data_offset": 0, 00:11:27.514 "data_size": 65536 00:11:27.514 }, 00:11:27.514 { 00:11:27.514 "name": "BaseBdev2", 00:11:27.514 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:27.514 "is_configured": true, 00:11:27.514 "data_offset": 0, 00:11:27.514 "data_size": 65536 00:11:27.514 }, 00:11:27.514 { 00:11:27.514 "name": "BaseBdev3", 00:11:27.514 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:27.514 "is_configured": true, 00:11:27.514 "data_offset": 0, 00:11:27.514 "data_size": 65536 00:11:27.514 }, 00:11:27.514 { 00:11:27.514 "name": "BaseBdev4", 00:11:27.514 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:27.514 "is_configured": true, 00:11:27.514 "data_offset": 0, 00:11:27.514 "data_size": 65536 00:11:27.514 } 00:11:27.514 ] 00:11:27.514 }' 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.514 23:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.082 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.083 [2024-12-06 23:45:39.364336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.083 "name": "Existed_Raid", 00:11:28.083 "aliases": [ 00:11:28.083 "d9fdf952-3d3b-490d-ba99-82b67e4318af" 00:11:28.083 ], 00:11:28.083 "product_name": "Raid Volume", 00:11:28.083 "block_size": 512, 00:11:28.083 "num_blocks": 65536, 00:11:28.083 "uuid": "d9fdf952-3d3b-490d-ba99-82b67e4318af", 00:11:28.083 "assigned_rate_limits": { 00:11:28.083 "rw_ios_per_sec": 0, 00:11:28.083 "rw_mbytes_per_sec": 0, 00:11:28.083 "r_mbytes_per_sec": 0, 00:11:28.083 "w_mbytes_per_sec": 0 00:11:28.083 }, 00:11:28.083 "claimed": false, 00:11:28.083 "zoned": false, 00:11:28.083 "supported_io_types": { 00:11:28.083 "read": true, 00:11:28.083 "write": true, 00:11:28.083 "unmap": false, 00:11:28.083 "flush": false, 00:11:28.083 "reset": true, 00:11:28.083 "nvme_admin": false, 00:11:28.083 "nvme_io": false, 00:11:28.083 "nvme_io_md": false, 00:11:28.083 "write_zeroes": true, 00:11:28.083 "zcopy": false, 00:11:28.083 "get_zone_info": false, 00:11:28.083 "zone_management": false, 00:11:28.083 "zone_append": false, 00:11:28.083 "compare": false, 00:11:28.083 "compare_and_write": false, 00:11:28.083 "abort": false, 00:11:28.083 "seek_hole": false, 00:11:28.083 "seek_data": false, 00:11:28.083 
"copy": false, 00:11:28.083 "nvme_iov_md": false 00:11:28.083 }, 00:11:28.083 "memory_domains": [ 00:11:28.083 { 00:11:28.083 "dma_device_id": "system", 00:11:28.083 "dma_device_type": 1 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.083 "dma_device_type": 2 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "dma_device_id": "system", 00:11:28.083 "dma_device_type": 1 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.083 "dma_device_type": 2 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "dma_device_id": "system", 00:11:28.083 "dma_device_type": 1 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.083 "dma_device_type": 2 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "dma_device_id": "system", 00:11:28.083 "dma_device_type": 1 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.083 "dma_device_type": 2 00:11:28.083 } 00:11:28.083 ], 00:11:28.083 "driver_specific": { 00:11:28.083 "raid": { 00:11:28.083 "uuid": "d9fdf952-3d3b-490d-ba99-82b67e4318af", 00:11:28.083 "strip_size_kb": 0, 00:11:28.083 "state": "online", 00:11:28.083 "raid_level": "raid1", 00:11:28.083 "superblock": false, 00:11:28.083 "num_base_bdevs": 4, 00:11:28.083 "num_base_bdevs_discovered": 4, 00:11:28.083 "num_base_bdevs_operational": 4, 00:11:28.083 "base_bdevs_list": [ 00:11:28.083 { 00:11:28.083 "name": "NewBaseBdev", 00:11:28.083 "uuid": "784e4439-0041-4817-8378-80746dcfca8c", 00:11:28.083 "is_configured": true, 00:11:28.083 "data_offset": 0, 00:11:28.083 "data_size": 65536 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "name": "BaseBdev2", 00:11:28.083 "uuid": "9b8b6ad1-d94c-4a8a-a33e-2976f9aa5cf1", 00:11:28.083 "is_configured": true, 00:11:28.083 "data_offset": 0, 00:11:28.083 "data_size": 65536 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "name": "BaseBdev3", 00:11:28.083 "uuid": "c4f60c0a-775b-442c-99c3-89799e4cd360", 00:11:28.083 
"is_configured": true, 00:11:28.083 "data_offset": 0, 00:11:28.083 "data_size": 65536 00:11:28.083 }, 00:11:28.083 { 00:11:28.083 "name": "BaseBdev4", 00:11:28.083 "uuid": "f0f07df0-14a4-4456-9b05-2722c8409063", 00:11:28.083 "is_configured": true, 00:11:28.083 "data_offset": 0, 00:11:28.083 "data_size": 65536 00:11:28.083 } 00:11:28.083 ] 00:11:28.083 } 00:11:28.083 } 00:11:28.083 }' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:28.083 BaseBdev2 00:11:28.083 BaseBdev3 00:11:28.083 BaseBdev4' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.083 23:45:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.083 23:45:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.083 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.342 [2024-12-06 23:45:39.671385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.342 [2024-12-06 23:45:39.671431] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.342 [2024-12-06 23:45:39.671530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.342 [2024-12-06 23:45:39.671876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.342 [2024-12-06 23:45:39.671893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73100 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73100 ']' 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73100 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73100 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.342 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73100' 00:11:28.343 killing process with pid 73100 00:11:28.343 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73100 00:11:28.343 [2024-12-06 23:45:39.721785] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.343 23:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73100 00:11:28.602 [2024-12-06 23:45:40.156484] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:29.980 00:11:29.980 real 0m11.855s 00:11:29.980 user 0m18.564s 00:11:29.980 sys 0m2.251s 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.980 ************************************ 00:11:29.980 END TEST raid_state_function_test 00:11:29.980 ************************************ 
00:11:29.980 23:45:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:29.980 23:45:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.980 23:45:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.980 23:45:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.980 ************************************ 00:11:29.980 START TEST raid_state_function_test_sb 00:11:29.980 ************************************ 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.980 
23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:29.980 Process raid pid: 73779 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73779 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73779' 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73779 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73779 ']' 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.980 23:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.981 23:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.239 [2024-12-06 23:45:41.572886] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:11:30.239 [2024-12-06 23:45:41.573014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.239 [2024-12-06 23:45:41.746117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.499 [2024-12-06 23:45:41.885933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.758 [2024-12-06 23:45:42.131492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.758 [2024-12-06 23:45:42.131624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.018 [2024-12-06 23:45:42.405969] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.018 [2024-12-06 23:45:42.406044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.018 [2024-12-06 23:45:42.406056] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.018 [2024-12-06 23:45:42.406067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.018 [2024-12-06 23:45:42.406073] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:31.018 [2024-12-06 23:45:42.406082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.018 [2024-12-06 23:45:42.406088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.018 [2024-12-06 23:45:42.406097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.018 23:45:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.018 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.018 "name": "Existed_Raid", 00:11:31.018 "uuid": "a69e6794-2dfa-4b37-ba82-a35f445dde57", 00:11:31.018 "strip_size_kb": 0, 00:11:31.018 "state": "configuring", 00:11:31.018 "raid_level": "raid1", 00:11:31.018 "superblock": true, 00:11:31.018 "num_base_bdevs": 4, 00:11:31.018 "num_base_bdevs_discovered": 0, 00:11:31.018 "num_base_bdevs_operational": 4, 00:11:31.018 "base_bdevs_list": [ 00:11:31.018 { 00:11:31.018 "name": "BaseBdev1", 00:11:31.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.018 "is_configured": false, 00:11:31.018 "data_offset": 0, 00:11:31.018 "data_size": 0 00:11:31.018 }, 00:11:31.018 { 00:11:31.018 "name": "BaseBdev2", 00:11:31.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.018 "is_configured": false, 00:11:31.018 "data_offset": 0, 00:11:31.018 "data_size": 0 00:11:31.019 }, 00:11:31.019 { 00:11:31.019 "name": "BaseBdev3", 00:11:31.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.019 "is_configured": false, 00:11:31.019 "data_offset": 0, 00:11:31.019 "data_size": 0 00:11:31.019 }, 00:11:31.019 { 00:11:31.019 "name": "BaseBdev4", 00:11:31.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.019 "is_configured": false, 00:11:31.019 "data_offset": 0, 00:11:31.019 "data_size": 0 00:11:31.019 } 00:11:31.019 ] 00:11:31.019 }' 00:11:31.019 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.019 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.278 23:45:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.278 [2024-12-06 23:45:42.797264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.278 [2024-12-06 23:45:42.797392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.278 [2024-12-06 23:45:42.805213] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.278 [2024-12-06 23:45:42.805297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.278 [2024-12-06 23:45:42.805326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.278 [2024-12-06 23:45:42.805349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.278 [2024-12-06 23:45:42.805366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.278 [2024-12-06 23:45:42.805386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.278 [2024-12-06 23:45:42.805403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:11:31.278 [2024-12-06 23:45:42.805422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.278 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.566 [2024-12-06 23:45:42.856029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.566 BaseBdev1 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.566 [ 00:11:31.566 { 00:11:31.566 "name": "BaseBdev1", 00:11:31.566 "aliases": [ 00:11:31.566 "37a1fd8f-2d4b-444a-a79b-3c41e1c8091d" 00:11:31.566 ], 00:11:31.566 "product_name": "Malloc disk", 00:11:31.566 "block_size": 512, 00:11:31.566 "num_blocks": 65536, 00:11:31.566 "uuid": "37a1fd8f-2d4b-444a-a79b-3c41e1c8091d", 00:11:31.566 "assigned_rate_limits": { 00:11:31.566 "rw_ios_per_sec": 0, 00:11:31.566 "rw_mbytes_per_sec": 0, 00:11:31.566 "r_mbytes_per_sec": 0, 00:11:31.566 "w_mbytes_per_sec": 0 00:11:31.566 }, 00:11:31.566 "claimed": true, 00:11:31.566 "claim_type": "exclusive_write", 00:11:31.566 "zoned": false, 00:11:31.566 "supported_io_types": { 00:11:31.566 "read": true, 00:11:31.566 "write": true, 00:11:31.566 "unmap": true, 00:11:31.566 "flush": true, 00:11:31.566 "reset": true, 00:11:31.566 "nvme_admin": false, 00:11:31.566 "nvme_io": false, 00:11:31.566 "nvme_io_md": false, 00:11:31.566 "write_zeroes": true, 00:11:31.566 "zcopy": true, 00:11:31.566 "get_zone_info": false, 00:11:31.566 "zone_management": false, 00:11:31.566 "zone_append": false, 00:11:31.566 "compare": false, 00:11:31.566 "compare_and_write": false, 00:11:31.566 "abort": true, 00:11:31.566 "seek_hole": false, 00:11:31.566 "seek_data": false, 00:11:31.566 "copy": true, 00:11:31.566 "nvme_iov_md": false 00:11:31.566 }, 00:11:31.566 "memory_domains": [ 00:11:31.566 { 00:11:31.566 "dma_device_id": "system", 00:11:31.566 "dma_device_type": 1 00:11:31.566 }, 00:11:31.566 { 00:11:31.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.566 "dma_device_type": 2 00:11:31.566 } 00:11:31.566 
], 00:11:31.566 "driver_specific": {} 00:11:31.566 } 00:11:31.566 ] 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.566 23:45:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.566 "name": "Existed_Raid", 00:11:31.566 "uuid": "663676ec-6f2a-490b-b19c-50ca7ce015b0", 00:11:31.566 "strip_size_kb": 0, 00:11:31.566 "state": "configuring", 00:11:31.566 "raid_level": "raid1", 00:11:31.566 "superblock": true, 00:11:31.566 "num_base_bdevs": 4, 00:11:31.566 "num_base_bdevs_discovered": 1, 00:11:31.566 "num_base_bdevs_operational": 4, 00:11:31.566 "base_bdevs_list": [ 00:11:31.566 { 00:11:31.566 "name": "BaseBdev1", 00:11:31.566 "uuid": "37a1fd8f-2d4b-444a-a79b-3c41e1c8091d", 00:11:31.566 "is_configured": true, 00:11:31.566 "data_offset": 2048, 00:11:31.566 "data_size": 63488 00:11:31.566 }, 00:11:31.566 { 00:11:31.566 "name": "BaseBdev2", 00:11:31.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.566 "is_configured": false, 00:11:31.566 "data_offset": 0, 00:11:31.566 "data_size": 0 00:11:31.566 }, 00:11:31.566 { 00:11:31.566 "name": "BaseBdev3", 00:11:31.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.566 "is_configured": false, 00:11:31.566 "data_offset": 0, 00:11:31.566 "data_size": 0 00:11:31.566 }, 00:11:31.566 { 00:11:31.566 "name": "BaseBdev4", 00:11:31.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.566 "is_configured": false, 00:11:31.566 "data_offset": 0, 00:11:31.566 "data_size": 0 00:11:31.566 } 00:11:31.566 ] 00:11:31.566 }' 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.566 23:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.836 23:45:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 [2024-12-06 23:45:43.355295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.836 [2024-12-06 23:45:43.355372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 [2024-12-06 23:45:43.363304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.836 [2024-12-06 23:45:43.365425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.836 [2024-12-06 23:45:43.365467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.836 [2024-12-06 23:45:43.365478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.836 [2024-12-06 23:45:43.365490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.836 [2024-12-06 23:45:43.365498] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.836 [2024-12-06 23:45:43.365506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.096 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:32.096 "name": "Existed_Raid", 00:11:32.096 "uuid": "323dc2d8-2057-417b-ad48-42aa5fc9d33a", 00:11:32.096 "strip_size_kb": 0, 00:11:32.096 "state": "configuring", 00:11:32.096 "raid_level": "raid1", 00:11:32.096 "superblock": true, 00:11:32.096 "num_base_bdevs": 4, 00:11:32.096 "num_base_bdevs_discovered": 1, 00:11:32.096 "num_base_bdevs_operational": 4, 00:11:32.096 "base_bdevs_list": [ 00:11:32.096 { 00:11:32.096 "name": "BaseBdev1", 00:11:32.096 "uuid": "37a1fd8f-2d4b-444a-a79b-3c41e1c8091d", 00:11:32.096 "is_configured": true, 00:11:32.096 "data_offset": 2048, 00:11:32.096 "data_size": 63488 00:11:32.096 }, 00:11:32.096 { 00:11:32.096 "name": "BaseBdev2", 00:11:32.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.096 "is_configured": false, 00:11:32.096 "data_offset": 0, 00:11:32.096 "data_size": 0 00:11:32.096 }, 00:11:32.096 { 00:11:32.096 "name": "BaseBdev3", 00:11:32.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.096 "is_configured": false, 00:11:32.096 "data_offset": 0, 00:11:32.096 "data_size": 0 00:11:32.096 }, 00:11:32.096 { 00:11:32.096 "name": "BaseBdev4", 00:11:32.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.096 "is_configured": false, 00:11:32.096 "data_offset": 0, 00:11:32.096 "data_size": 0 00:11:32.096 } 00:11:32.096 ] 00:11:32.096 }' 00:11:32.096 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.096 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.357 [2024-12-06 23:45:43.798757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:11:32.357 BaseBdev2 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.357 [ 00:11:32.357 { 00:11:32.357 "name": "BaseBdev2", 00:11:32.357 "aliases": [ 00:11:32.357 "888dfc09-1c44-4766-8e87-cd0ae0c1c9e4" 00:11:32.357 ], 00:11:32.357 "product_name": "Malloc disk", 00:11:32.357 "block_size": 512, 00:11:32.357 "num_blocks": 65536, 00:11:32.357 "uuid": "888dfc09-1c44-4766-8e87-cd0ae0c1c9e4", 00:11:32.357 
"assigned_rate_limits": { 00:11:32.357 "rw_ios_per_sec": 0, 00:11:32.357 "rw_mbytes_per_sec": 0, 00:11:32.357 "r_mbytes_per_sec": 0, 00:11:32.357 "w_mbytes_per_sec": 0 00:11:32.357 }, 00:11:32.357 "claimed": true, 00:11:32.357 "claim_type": "exclusive_write", 00:11:32.357 "zoned": false, 00:11:32.357 "supported_io_types": { 00:11:32.357 "read": true, 00:11:32.357 "write": true, 00:11:32.357 "unmap": true, 00:11:32.357 "flush": true, 00:11:32.357 "reset": true, 00:11:32.357 "nvme_admin": false, 00:11:32.357 "nvme_io": false, 00:11:32.357 "nvme_io_md": false, 00:11:32.357 "write_zeroes": true, 00:11:32.357 "zcopy": true, 00:11:32.357 "get_zone_info": false, 00:11:32.357 "zone_management": false, 00:11:32.357 "zone_append": false, 00:11:32.357 "compare": false, 00:11:32.357 "compare_and_write": false, 00:11:32.357 "abort": true, 00:11:32.357 "seek_hole": false, 00:11:32.357 "seek_data": false, 00:11:32.357 "copy": true, 00:11:32.357 "nvme_iov_md": false 00:11:32.357 }, 00:11:32.357 "memory_domains": [ 00:11:32.357 { 00:11:32.357 "dma_device_id": "system", 00:11:32.357 "dma_device_type": 1 00:11:32.357 }, 00:11:32.357 { 00:11:32.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.357 "dma_device_type": 2 00:11:32.357 } 00:11:32.357 ], 00:11:32.357 "driver_specific": {} 00:11:32.357 } 00:11:32.357 ] 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.357 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.358 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.358 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.358 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.358 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.358 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.358 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.358 "name": "Existed_Raid", 00:11:32.358 "uuid": "323dc2d8-2057-417b-ad48-42aa5fc9d33a", 00:11:32.358 "strip_size_kb": 0, 00:11:32.358 "state": "configuring", 00:11:32.358 "raid_level": "raid1", 00:11:32.358 "superblock": true, 00:11:32.358 "num_base_bdevs": 4, 00:11:32.358 "num_base_bdevs_discovered": 2, 00:11:32.358 "num_base_bdevs_operational": 4, 
00:11:32.358 "base_bdevs_list": [ 00:11:32.358 { 00:11:32.358 "name": "BaseBdev1", 00:11:32.358 "uuid": "37a1fd8f-2d4b-444a-a79b-3c41e1c8091d", 00:11:32.358 "is_configured": true, 00:11:32.358 "data_offset": 2048, 00:11:32.358 "data_size": 63488 00:11:32.358 }, 00:11:32.358 { 00:11:32.358 "name": "BaseBdev2", 00:11:32.358 "uuid": "888dfc09-1c44-4766-8e87-cd0ae0c1c9e4", 00:11:32.358 "is_configured": true, 00:11:32.358 "data_offset": 2048, 00:11:32.358 "data_size": 63488 00:11:32.358 }, 00:11:32.358 { 00:11:32.358 "name": "BaseBdev3", 00:11:32.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.358 "is_configured": false, 00:11:32.358 "data_offset": 0, 00:11:32.358 "data_size": 0 00:11:32.358 }, 00:11:32.358 { 00:11:32.358 "name": "BaseBdev4", 00:11:32.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.358 "is_configured": false, 00:11:32.358 "data_offset": 0, 00:11:32.358 "data_size": 0 00:11:32.358 } 00:11:32.358 ] 00:11:32.358 }' 00:11:32.358 23:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.358 23:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.929 [2024-12-06 23:45:44.315751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.929 BaseBdev3 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.929 [ 00:11:32.929 { 00:11:32.929 "name": "BaseBdev3", 00:11:32.929 "aliases": [ 00:11:32.929 "9b6475ff-7839-4019-a2be-71e856dfa5dc" 00:11:32.929 ], 00:11:32.929 "product_name": "Malloc disk", 00:11:32.929 "block_size": 512, 00:11:32.929 "num_blocks": 65536, 00:11:32.929 "uuid": "9b6475ff-7839-4019-a2be-71e856dfa5dc", 00:11:32.929 "assigned_rate_limits": { 00:11:32.929 "rw_ios_per_sec": 0, 00:11:32.929 "rw_mbytes_per_sec": 0, 00:11:32.929 "r_mbytes_per_sec": 0, 00:11:32.929 "w_mbytes_per_sec": 0 00:11:32.929 }, 00:11:32.929 "claimed": true, 00:11:32.929 "claim_type": "exclusive_write", 00:11:32.929 "zoned": false, 00:11:32.929 "supported_io_types": { 00:11:32.929 "read": true, 00:11:32.929 
"write": true, 00:11:32.929 "unmap": true, 00:11:32.929 "flush": true, 00:11:32.929 "reset": true, 00:11:32.929 "nvme_admin": false, 00:11:32.929 "nvme_io": false, 00:11:32.929 "nvme_io_md": false, 00:11:32.929 "write_zeroes": true, 00:11:32.929 "zcopy": true, 00:11:32.929 "get_zone_info": false, 00:11:32.929 "zone_management": false, 00:11:32.929 "zone_append": false, 00:11:32.929 "compare": false, 00:11:32.929 "compare_and_write": false, 00:11:32.929 "abort": true, 00:11:32.929 "seek_hole": false, 00:11:32.929 "seek_data": false, 00:11:32.929 "copy": true, 00:11:32.929 "nvme_iov_md": false 00:11:32.929 }, 00:11:32.929 "memory_domains": [ 00:11:32.929 { 00:11:32.929 "dma_device_id": "system", 00:11:32.929 "dma_device_type": 1 00:11:32.929 }, 00:11:32.929 { 00:11:32.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.929 "dma_device_type": 2 00:11:32.929 } 00:11:32.929 ], 00:11:32.929 "driver_specific": {} 00:11:32.929 } 00:11:32.929 ] 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.929 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.929 "name": "Existed_Raid", 00:11:32.929 "uuid": "323dc2d8-2057-417b-ad48-42aa5fc9d33a", 00:11:32.929 "strip_size_kb": 0, 00:11:32.929 "state": "configuring", 00:11:32.930 "raid_level": "raid1", 00:11:32.930 "superblock": true, 00:11:32.930 "num_base_bdevs": 4, 00:11:32.930 "num_base_bdevs_discovered": 3, 00:11:32.930 "num_base_bdevs_operational": 4, 00:11:32.930 "base_bdevs_list": [ 00:11:32.930 { 00:11:32.930 "name": "BaseBdev1", 00:11:32.930 "uuid": "37a1fd8f-2d4b-444a-a79b-3c41e1c8091d", 00:11:32.930 "is_configured": true, 00:11:32.930 "data_offset": 2048, 00:11:32.930 "data_size": 63488 00:11:32.930 }, 00:11:32.930 { 00:11:32.930 "name": "BaseBdev2", 00:11:32.930 "uuid": 
"888dfc09-1c44-4766-8e87-cd0ae0c1c9e4", 00:11:32.930 "is_configured": true, 00:11:32.930 "data_offset": 2048, 00:11:32.930 "data_size": 63488 00:11:32.930 }, 00:11:32.930 { 00:11:32.930 "name": "BaseBdev3", 00:11:32.930 "uuid": "9b6475ff-7839-4019-a2be-71e856dfa5dc", 00:11:32.930 "is_configured": true, 00:11:32.930 "data_offset": 2048, 00:11:32.930 "data_size": 63488 00:11:32.930 }, 00:11:32.930 { 00:11:32.930 "name": "BaseBdev4", 00:11:32.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.930 "is_configured": false, 00:11:32.930 "data_offset": 0, 00:11:32.930 "data_size": 0 00:11:32.930 } 00:11:32.930 ] 00:11:32.930 }' 00:11:32.930 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.930 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.499 [2024-12-06 23:45:44.871701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.499 [2024-12-06 23:45:44.872011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:33.499 [2024-12-06 23:45:44.872036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.499 [2024-12-06 23:45:44.872349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.499 BaseBdev4 00:11:33.499 [2024-12-06 23:45:44.872537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:33.499 [2024-12-06 23:45:44.872553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:33.499 [2024-12-06 23:45:44.872732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.499 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.500 [ 00:11:33.500 { 00:11:33.500 "name": "BaseBdev4", 00:11:33.500 "aliases": [ 00:11:33.500 "03cd8b3d-cd96-4cbe-90ce-353f06884217" 00:11:33.500 ], 00:11:33.500 "product_name": "Malloc disk", 00:11:33.500 "block_size": 512, 00:11:33.500 
"num_blocks": 65536, 00:11:33.500 "uuid": "03cd8b3d-cd96-4cbe-90ce-353f06884217", 00:11:33.500 "assigned_rate_limits": { 00:11:33.500 "rw_ios_per_sec": 0, 00:11:33.500 "rw_mbytes_per_sec": 0, 00:11:33.500 "r_mbytes_per_sec": 0, 00:11:33.500 "w_mbytes_per_sec": 0 00:11:33.500 }, 00:11:33.500 "claimed": true, 00:11:33.500 "claim_type": "exclusive_write", 00:11:33.500 "zoned": false, 00:11:33.500 "supported_io_types": { 00:11:33.500 "read": true, 00:11:33.500 "write": true, 00:11:33.500 "unmap": true, 00:11:33.500 "flush": true, 00:11:33.500 "reset": true, 00:11:33.500 "nvme_admin": false, 00:11:33.500 "nvme_io": false, 00:11:33.500 "nvme_io_md": false, 00:11:33.500 "write_zeroes": true, 00:11:33.500 "zcopy": true, 00:11:33.500 "get_zone_info": false, 00:11:33.500 "zone_management": false, 00:11:33.500 "zone_append": false, 00:11:33.500 "compare": false, 00:11:33.500 "compare_and_write": false, 00:11:33.500 "abort": true, 00:11:33.500 "seek_hole": false, 00:11:33.500 "seek_data": false, 00:11:33.500 "copy": true, 00:11:33.500 "nvme_iov_md": false 00:11:33.500 }, 00:11:33.500 "memory_domains": [ 00:11:33.500 { 00:11:33.500 "dma_device_id": "system", 00:11:33.500 "dma_device_type": 1 00:11:33.500 }, 00:11:33.500 { 00:11:33.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.500 "dma_device_type": 2 00:11:33.500 } 00:11:33.500 ], 00:11:33.500 "driver_specific": {} 00:11:33.500 } 00:11:33.500 ] 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.500 "name": "Existed_Raid", 00:11:33.500 "uuid": "323dc2d8-2057-417b-ad48-42aa5fc9d33a", 00:11:33.500 "strip_size_kb": 0, 00:11:33.500 "state": "online", 00:11:33.500 "raid_level": "raid1", 00:11:33.500 "superblock": true, 00:11:33.500 "num_base_bdevs": 4, 
00:11:33.500 "num_base_bdevs_discovered": 4, 00:11:33.500 "num_base_bdevs_operational": 4, 00:11:33.500 "base_bdevs_list": [ 00:11:33.500 { 00:11:33.500 "name": "BaseBdev1", 00:11:33.500 "uuid": "37a1fd8f-2d4b-444a-a79b-3c41e1c8091d", 00:11:33.500 "is_configured": true, 00:11:33.500 "data_offset": 2048, 00:11:33.500 "data_size": 63488 00:11:33.500 }, 00:11:33.500 { 00:11:33.500 "name": "BaseBdev2", 00:11:33.500 "uuid": "888dfc09-1c44-4766-8e87-cd0ae0c1c9e4", 00:11:33.500 "is_configured": true, 00:11:33.500 "data_offset": 2048, 00:11:33.500 "data_size": 63488 00:11:33.500 }, 00:11:33.500 { 00:11:33.500 "name": "BaseBdev3", 00:11:33.500 "uuid": "9b6475ff-7839-4019-a2be-71e856dfa5dc", 00:11:33.500 "is_configured": true, 00:11:33.500 "data_offset": 2048, 00:11:33.500 "data_size": 63488 00:11:33.500 }, 00:11:33.500 { 00:11:33.500 "name": "BaseBdev4", 00:11:33.500 "uuid": "03cd8b3d-cd96-4cbe-90ce-353f06884217", 00:11:33.500 "is_configured": true, 00:11:33.500 "data_offset": 2048, 00:11:33.500 "data_size": 63488 00:11:33.500 } 00:11:33.500 ] 00:11:33.500 }' 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.500 23:45:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.076 
23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.076 [2024-12-06 23:45:45.355278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.076 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.076 "name": "Existed_Raid", 00:11:34.076 "aliases": [ 00:11:34.076 "323dc2d8-2057-417b-ad48-42aa5fc9d33a" 00:11:34.076 ], 00:11:34.076 "product_name": "Raid Volume", 00:11:34.076 "block_size": 512, 00:11:34.076 "num_blocks": 63488, 00:11:34.076 "uuid": "323dc2d8-2057-417b-ad48-42aa5fc9d33a", 00:11:34.076 "assigned_rate_limits": { 00:11:34.076 "rw_ios_per_sec": 0, 00:11:34.076 "rw_mbytes_per_sec": 0, 00:11:34.076 "r_mbytes_per_sec": 0, 00:11:34.076 "w_mbytes_per_sec": 0 00:11:34.076 }, 00:11:34.076 "claimed": false, 00:11:34.076 "zoned": false, 00:11:34.076 "supported_io_types": { 00:11:34.076 "read": true, 00:11:34.076 "write": true, 00:11:34.076 "unmap": false, 00:11:34.076 "flush": false, 00:11:34.076 "reset": true, 00:11:34.076 "nvme_admin": false, 00:11:34.076 "nvme_io": false, 00:11:34.076 "nvme_io_md": false, 00:11:34.076 "write_zeroes": true, 00:11:34.076 "zcopy": false, 00:11:34.076 "get_zone_info": false, 00:11:34.076 "zone_management": false, 00:11:34.076 "zone_append": false, 00:11:34.076 "compare": false, 00:11:34.076 "compare_and_write": false, 00:11:34.076 "abort": false, 00:11:34.076 "seek_hole": false, 00:11:34.076 "seek_data": false, 00:11:34.076 "copy": false, 00:11:34.076 
"nvme_iov_md": false 00:11:34.076 }, 00:11:34.076 "memory_domains": [ 00:11:34.076 { 00:11:34.076 "dma_device_id": "system", 00:11:34.076 "dma_device_type": 1 00:11:34.076 }, 00:11:34.076 { 00:11:34.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.076 "dma_device_type": 2 00:11:34.076 }, 00:11:34.076 { 00:11:34.076 "dma_device_id": "system", 00:11:34.076 "dma_device_type": 1 00:11:34.076 }, 00:11:34.076 { 00:11:34.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.076 "dma_device_type": 2 00:11:34.076 }, 00:11:34.076 { 00:11:34.076 "dma_device_id": "system", 00:11:34.077 "dma_device_type": 1 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.077 "dma_device_type": 2 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "dma_device_id": "system", 00:11:34.077 "dma_device_type": 1 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.077 "dma_device_type": 2 00:11:34.077 } 00:11:34.077 ], 00:11:34.077 "driver_specific": { 00:11:34.077 "raid": { 00:11:34.077 "uuid": "323dc2d8-2057-417b-ad48-42aa5fc9d33a", 00:11:34.077 "strip_size_kb": 0, 00:11:34.077 "state": "online", 00:11:34.077 "raid_level": "raid1", 00:11:34.077 "superblock": true, 00:11:34.077 "num_base_bdevs": 4, 00:11:34.077 "num_base_bdevs_discovered": 4, 00:11:34.077 "num_base_bdevs_operational": 4, 00:11:34.077 "base_bdevs_list": [ 00:11:34.077 { 00:11:34.077 "name": "BaseBdev1", 00:11:34.077 "uuid": "37a1fd8f-2d4b-444a-a79b-3c41e1c8091d", 00:11:34.077 "is_configured": true, 00:11:34.077 "data_offset": 2048, 00:11:34.077 "data_size": 63488 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "name": "BaseBdev2", 00:11:34.077 "uuid": "888dfc09-1c44-4766-8e87-cd0ae0c1c9e4", 00:11:34.077 "is_configured": true, 00:11:34.077 "data_offset": 2048, 00:11:34.077 "data_size": 63488 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "name": "BaseBdev3", 00:11:34.077 "uuid": "9b6475ff-7839-4019-a2be-71e856dfa5dc", 00:11:34.077 "is_configured": true, 
00:11:34.077 "data_offset": 2048, 00:11:34.077 "data_size": 63488 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "name": "BaseBdev4", 00:11:34.077 "uuid": "03cd8b3d-cd96-4cbe-90ce-353f06884217", 00:11:34.077 "is_configured": true, 00:11:34.077 "data_offset": 2048, 00:11:34.077 "data_size": 63488 00:11:34.077 } 00:11:34.077 ] 00:11:34.077 } 00:11:34.077 } 00:11:34.077 }' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:34.077 BaseBdev2 00:11:34.077 BaseBdev3 00:11:34.077 BaseBdev4' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.077 23:45:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.077 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.336 [2024-12-06 23:45:45.682444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:34.336 23:45:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.336 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.337 "name": "Existed_Raid", 00:11:34.337 "uuid": "323dc2d8-2057-417b-ad48-42aa5fc9d33a", 00:11:34.337 "strip_size_kb": 0, 00:11:34.337 
"state": "online", 00:11:34.337 "raid_level": "raid1", 00:11:34.337 "superblock": true, 00:11:34.337 "num_base_bdevs": 4, 00:11:34.337 "num_base_bdevs_discovered": 3, 00:11:34.337 "num_base_bdevs_operational": 3, 00:11:34.337 "base_bdevs_list": [ 00:11:34.337 { 00:11:34.337 "name": null, 00:11:34.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.337 "is_configured": false, 00:11:34.337 "data_offset": 0, 00:11:34.337 "data_size": 63488 00:11:34.337 }, 00:11:34.337 { 00:11:34.337 "name": "BaseBdev2", 00:11:34.337 "uuid": "888dfc09-1c44-4766-8e87-cd0ae0c1c9e4", 00:11:34.337 "is_configured": true, 00:11:34.337 "data_offset": 2048, 00:11:34.337 "data_size": 63488 00:11:34.337 }, 00:11:34.337 { 00:11:34.337 "name": "BaseBdev3", 00:11:34.337 "uuid": "9b6475ff-7839-4019-a2be-71e856dfa5dc", 00:11:34.337 "is_configured": true, 00:11:34.337 "data_offset": 2048, 00:11:34.337 "data_size": 63488 00:11:34.337 }, 00:11:34.337 { 00:11:34.337 "name": "BaseBdev4", 00:11:34.337 "uuid": "03cd8b3d-cd96-4cbe-90ce-353f06884217", 00:11:34.337 "is_configured": true, 00:11:34.337 "data_offset": 2048, 00:11:34.337 "data_size": 63488 00:11:34.337 } 00:11:34.337 ] 00:11:34.337 }' 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.337 23:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:34.903 23:45:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.903 [2024-12-06 23:45:46.211605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.903 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.903 [2024-12-06 23:45:46.362040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.162 [2024-12-06 23:45:46.521122] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:35.162 [2024-12-06 23:45:46.521253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.162 [2024-12-06 23:45:46.624382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.162 [2024-12-06 23:45:46.624443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.162 [2024-12-06 23:45:46.624457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:35.162 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:35.163 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:35.163 23:45:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.163 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.163 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.163 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.423 BaseBdev2 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:35.423 [ 00:11:35.423 { 00:11:35.423 "name": "BaseBdev2", 00:11:35.423 "aliases": [ 00:11:35.423 "19f95ce8-d2fa-411f-9afe-26b81ea128c6" 00:11:35.423 ], 00:11:35.423 "product_name": "Malloc disk", 00:11:35.423 "block_size": 512, 00:11:35.423 "num_blocks": 65536, 00:11:35.423 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:35.423 "assigned_rate_limits": { 00:11:35.423 "rw_ios_per_sec": 0, 00:11:35.423 "rw_mbytes_per_sec": 0, 00:11:35.423 "r_mbytes_per_sec": 0, 00:11:35.423 "w_mbytes_per_sec": 0 00:11:35.423 }, 00:11:35.423 "claimed": false, 00:11:35.423 "zoned": false, 00:11:35.423 "supported_io_types": { 00:11:35.423 "read": true, 00:11:35.423 "write": true, 00:11:35.423 "unmap": true, 00:11:35.423 "flush": true, 00:11:35.423 "reset": true, 00:11:35.423 "nvme_admin": false, 00:11:35.423 "nvme_io": false, 00:11:35.423 "nvme_io_md": false, 00:11:35.423 "write_zeroes": true, 00:11:35.423 "zcopy": true, 00:11:35.423 "get_zone_info": false, 00:11:35.423 "zone_management": false, 00:11:35.423 "zone_append": false, 00:11:35.423 "compare": false, 00:11:35.423 "compare_and_write": false, 00:11:35.423 "abort": true, 00:11:35.423 "seek_hole": false, 00:11:35.423 "seek_data": false, 00:11:35.423 "copy": true, 00:11:35.423 "nvme_iov_md": false 00:11:35.423 }, 00:11:35.423 "memory_domains": [ 00:11:35.423 { 00:11:35.423 "dma_device_id": "system", 00:11:35.423 "dma_device_type": 1 00:11:35.423 }, 00:11:35.423 { 00:11:35.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.423 "dma_device_type": 2 00:11:35.423 } 00:11:35.423 ], 00:11:35.423 "driver_specific": {} 00:11:35.423 } 00:11:35.423 ] 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.423 23:45:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.423 BaseBdev3 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.423 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.423 23:45:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.423 [ 00:11:35.423 { 00:11:35.423 "name": "BaseBdev3", 00:11:35.423 "aliases": [ 00:11:35.423 "8d562ff4-582d-4cad-9e85-be0c8641b131" 00:11:35.423 ], 00:11:35.423 "product_name": "Malloc disk", 00:11:35.423 "block_size": 512, 00:11:35.423 "num_blocks": 65536, 00:11:35.423 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:35.423 "assigned_rate_limits": { 00:11:35.423 "rw_ios_per_sec": 0, 00:11:35.423 "rw_mbytes_per_sec": 0, 00:11:35.423 "r_mbytes_per_sec": 0, 00:11:35.423 "w_mbytes_per_sec": 0 00:11:35.423 }, 00:11:35.423 "claimed": false, 00:11:35.424 "zoned": false, 00:11:35.424 "supported_io_types": { 00:11:35.424 "read": true, 00:11:35.424 "write": true, 00:11:35.424 "unmap": true, 00:11:35.424 "flush": true, 00:11:35.424 "reset": true, 00:11:35.424 "nvme_admin": false, 00:11:35.424 "nvme_io": false, 00:11:35.424 "nvme_io_md": false, 00:11:35.424 "write_zeroes": true, 00:11:35.424 "zcopy": true, 00:11:35.424 "get_zone_info": false, 00:11:35.424 "zone_management": false, 00:11:35.424 "zone_append": false, 00:11:35.424 "compare": false, 00:11:35.424 "compare_and_write": false, 00:11:35.424 "abort": true, 00:11:35.424 "seek_hole": false, 00:11:35.424 "seek_data": false, 00:11:35.424 "copy": true, 00:11:35.424 "nvme_iov_md": false 00:11:35.424 }, 00:11:35.424 "memory_domains": [ 00:11:35.424 { 00:11:35.424 "dma_device_id": "system", 00:11:35.424 "dma_device_type": 1 00:11:35.424 }, 00:11:35.424 { 00:11:35.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.424 "dma_device_type": 2 00:11:35.424 } 00:11:35.424 ], 00:11:35.424 "driver_specific": {} 00:11:35.424 } 00:11:35.424 ] 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.424 BaseBdev4 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.424 [ 00:11:35.424 { 00:11:35.424 "name": "BaseBdev4", 00:11:35.424 "aliases": [ 00:11:35.424 "609627d4-fd62-4b25-bfa3-c36dac2ecdad" 00:11:35.424 ], 00:11:35.424 "product_name": "Malloc disk", 00:11:35.424 "block_size": 512, 00:11:35.424 "num_blocks": 65536, 00:11:35.424 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:35.424 "assigned_rate_limits": { 00:11:35.424 "rw_ios_per_sec": 0, 00:11:35.424 "rw_mbytes_per_sec": 0, 00:11:35.424 "r_mbytes_per_sec": 0, 00:11:35.424 "w_mbytes_per_sec": 0 00:11:35.424 }, 00:11:35.424 "claimed": false, 00:11:35.424 "zoned": false, 00:11:35.424 "supported_io_types": { 00:11:35.424 "read": true, 00:11:35.424 "write": true, 00:11:35.424 "unmap": true, 00:11:35.424 "flush": true, 00:11:35.424 "reset": true, 00:11:35.424 "nvme_admin": false, 00:11:35.424 "nvme_io": false, 00:11:35.424 "nvme_io_md": false, 00:11:35.424 "write_zeroes": true, 00:11:35.424 "zcopy": true, 00:11:35.424 "get_zone_info": false, 00:11:35.424 "zone_management": false, 00:11:35.424 "zone_append": false, 00:11:35.424 "compare": false, 00:11:35.424 "compare_and_write": false, 00:11:35.424 "abort": true, 00:11:35.424 "seek_hole": false, 00:11:35.424 "seek_data": false, 00:11:35.424 "copy": true, 00:11:35.424 "nvme_iov_md": false 00:11:35.424 }, 00:11:35.424 "memory_domains": [ 00:11:35.424 { 00:11:35.424 "dma_device_id": "system", 00:11:35.424 "dma_device_type": 1 00:11:35.424 }, 00:11:35.424 { 00:11:35.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.424 "dma_device_type": 2 00:11:35.424 } 00:11:35.424 ], 00:11:35.424 "driver_specific": {} 00:11:35.424 } 00:11:35.424 ] 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
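Each base bdev above is created with `rpc_cmd bdev_malloc_create 32 512 -b BaseBdevN`, i.e. a 32 MiB Malloc disk with a 512-byte block size, which is why every descriptor reports `"block_size": 512` and `"num_blocks": 65536`. A minimal sketch of that arithmetic (the helper name is illustrative, not part of SPDK):

```python
def malloc_num_blocks(size_mb: int, block_size: int) -> int:
    """Blocks reported by bdev_get_bdevs for a Malloc disk of size_mb MiB."""
    size_bytes = size_mb * 1024 * 1024
    assert size_bytes % block_size == 0, "size must be block-aligned"
    return size_bytes // block_size

# 32 MiB at 512-byte blocks, as used for BaseBdev2..BaseBdev4 above
print(malloc_num_blocks(32, 512))  # -> 65536
```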
00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.424 [2024-12-06 23:45:46.946298] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.424 [2024-12-06 23:45:46.946430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.424 [2024-12-06 23:45:46.946472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.424 [2024-12-06 23:45:46.948608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.424 [2024-12-06 23:45:46.948719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
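The raid bdev is created with `-s` (superblock) over the 65536-block members, and the state dumps that follow report `"data_offset": 2048` and `"data_size": 63488` for each configured member. A sketch of that relationship, taking the 2048-block offset as the superblock reservation implied by this log (an inference from the output, not a statement of SPDK's on-disk format):

```python
BLOCK_SIZE = 512
NUM_BLOCKS = 65536   # each 32 MiB Malloc base bdev
DATA_OFFSET = 2048   # blocks reserved at the front of each member, per the log

# Usable data region of each member once the superblock region is excluded
data_size = NUM_BLOCKS - DATA_OFFSET
print(data_size)  # -> 63488, matching "data_size": 63488 in the dumps
```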
00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.424 23:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.684 23:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.684 "name": "Existed_Raid", 00:11:35.684 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:35.684 "strip_size_kb": 0, 00:11:35.684 "state": "configuring", 00:11:35.684 "raid_level": "raid1", 00:11:35.684 "superblock": true, 00:11:35.684 "num_base_bdevs": 4, 00:11:35.684 "num_base_bdevs_discovered": 3, 00:11:35.684 "num_base_bdevs_operational": 4, 00:11:35.684 "base_bdevs_list": [ 00:11:35.684 { 00:11:35.684 "name": "BaseBdev1", 00:11:35.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.684 "is_configured": false, 00:11:35.684 "data_offset": 0, 00:11:35.684 "data_size": 0 00:11:35.684 }, 00:11:35.684 { 00:11:35.684 "name": "BaseBdev2", 00:11:35.684 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 
00:11:35.684 "is_configured": true, 00:11:35.684 "data_offset": 2048, 00:11:35.684 "data_size": 63488 00:11:35.684 }, 00:11:35.684 { 00:11:35.684 "name": "BaseBdev3", 00:11:35.684 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:35.684 "is_configured": true, 00:11:35.684 "data_offset": 2048, 00:11:35.684 "data_size": 63488 00:11:35.684 }, 00:11:35.684 { 00:11:35.684 "name": "BaseBdev4", 00:11:35.684 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:35.684 "is_configured": true, 00:11:35.684 "data_offset": 2048, 00:11:35.684 "data_size": 63488 00:11:35.684 } 00:11:35.684 ] 00:11:35.684 }' 00:11:35.684 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.684 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.944 [2024-12-06 23:45:47.357721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
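`verify_raid_bdev_state` selects the raid entry with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares fields such as `state` and `num_base_bdevs_discovered`; after `bdev_raid_remove_base_bdev BaseBdev2` the discovered count drops from 3 to 2 while the array stays `configuring`. A rough Python equivalent of that check, over a trimmed-down copy of the dump (the literal below is abbreviated, not the full RPC output):

```python
import json

# Abbreviated raid_bdev_info after BaseBdev2 is removed (see the dump above)
raid_info = json.loads("""{
  "name": "Existed_Raid",
  "state": "configuring",
  "num_base_bdevs": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}""")

# Discovered members are the ones still configured into the array
discovered = sum(1 for b in raid_info["base_bdevs_list"] if b["is_configured"])
print(raid_info["state"], discovered)  # -> configuring 2
```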
00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.944 "name": "Existed_Raid", 00:11:35.944 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:35.944 "strip_size_kb": 0, 00:11:35.944 "state": "configuring", 00:11:35.944 "raid_level": "raid1", 00:11:35.944 "superblock": true, 00:11:35.944 "num_base_bdevs": 4, 00:11:35.944 "num_base_bdevs_discovered": 2, 00:11:35.944 "num_base_bdevs_operational": 4, 00:11:35.944 "base_bdevs_list": [ 00:11:35.944 { 00:11:35.944 "name": "BaseBdev1", 00:11:35.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.944 "is_configured": false, 00:11:35.944 "data_offset": 0, 00:11:35.944 "data_size": 0 00:11:35.944 }, 00:11:35.944 { 00:11:35.944 "name": null, 00:11:35.944 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:35.944 
"is_configured": false, 00:11:35.944 "data_offset": 0, 00:11:35.944 "data_size": 63488 00:11:35.944 }, 00:11:35.944 { 00:11:35.944 "name": "BaseBdev3", 00:11:35.944 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:35.944 "is_configured": true, 00:11:35.944 "data_offset": 2048, 00:11:35.944 "data_size": 63488 00:11:35.944 }, 00:11:35.944 { 00:11:35.944 "name": "BaseBdev4", 00:11:35.944 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:35.944 "is_configured": true, 00:11:35.944 "data_offset": 2048, 00:11:35.944 "data_size": 63488 00:11:35.944 } 00:11:35.944 ] 00:11:35.944 }' 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.944 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.513 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.514 [2024-12-06 23:45:47.871876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.514 BaseBdev1 
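After creating BaseBdev1, the test calls `waitforbdev` (from `autotest_common.sh`), which retries `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the bdev is visible. A generic polling loop of that shape, with a stand-in predicate instead of the real RPC call (the helper name and intervals are assumptions, not SPDK's exact implementation):

```python
import time

def wait_for(predicate, timeout_s: float = 2.0, interval_s: float = 0.1) -> bool:
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval_s)
    return False

# Stand-in for "is BaseBdev1 registered yet?": here it appears immediately
print(wait_for(lambda: True))  # -> True
```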
00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.514 [ 00:11:36.514 { 00:11:36.514 "name": "BaseBdev1", 00:11:36.514 "aliases": [ 00:11:36.514 "6121cccc-437d-44af-ad0c-103530aa321f" 00:11:36.514 ], 00:11:36.514 "product_name": "Malloc disk", 00:11:36.514 "block_size": 512, 00:11:36.514 "num_blocks": 65536, 00:11:36.514 "uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:36.514 "assigned_rate_limits": { 00:11:36.514 
"rw_ios_per_sec": 0, 00:11:36.514 "rw_mbytes_per_sec": 0, 00:11:36.514 "r_mbytes_per_sec": 0, 00:11:36.514 "w_mbytes_per_sec": 0 00:11:36.514 }, 00:11:36.514 "claimed": true, 00:11:36.514 "claim_type": "exclusive_write", 00:11:36.514 "zoned": false, 00:11:36.514 "supported_io_types": { 00:11:36.514 "read": true, 00:11:36.514 "write": true, 00:11:36.514 "unmap": true, 00:11:36.514 "flush": true, 00:11:36.514 "reset": true, 00:11:36.514 "nvme_admin": false, 00:11:36.514 "nvme_io": false, 00:11:36.514 "nvme_io_md": false, 00:11:36.514 "write_zeroes": true, 00:11:36.514 "zcopy": true, 00:11:36.514 "get_zone_info": false, 00:11:36.514 "zone_management": false, 00:11:36.514 "zone_append": false, 00:11:36.514 "compare": false, 00:11:36.514 "compare_and_write": false, 00:11:36.514 "abort": true, 00:11:36.514 "seek_hole": false, 00:11:36.514 "seek_data": false, 00:11:36.514 "copy": true, 00:11:36.514 "nvme_iov_md": false 00:11:36.514 }, 00:11:36.514 "memory_domains": [ 00:11:36.514 { 00:11:36.514 "dma_device_id": "system", 00:11:36.514 "dma_device_type": 1 00:11:36.514 }, 00:11:36.514 { 00:11:36.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.514 "dma_device_type": 2 00:11:36.514 } 00:11:36.514 ], 00:11:36.514 "driver_specific": {} 00:11:36.514 } 00:11:36.514 ] 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.514 "name": "Existed_Raid", 00:11:36.514 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:36.514 "strip_size_kb": 0, 00:11:36.514 "state": "configuring", 00:11:36.514 "raid_level": "raid1", 00:11:36.514 "superblock": true, 00:11:36.514 "num_base_bdevs": 4, 00:11:36.514 "num_base_bdevs_discovered": 3, 00:11:36.514 "num_base_bdevs_operational": 4, 00:11:36.514 "base_bdevs_list": [ 00:11:36.514 { 00:11:36.514 "name": "BaseBdev1", 00:11:36.514 "uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:36.514 "is_configured": true, 00:11:36.514 "data_offset": 2048, 00:11:36.514 "data_size": 63488 
00:11:36.514 }, 00:11:36.514 { 00:11:36.514 "name": null, 00:11:36.514 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:36.514 "is_configured": false, 00:11:36.514 "data_offset": 0, 00:11:36.514 "data_size": 63488 00:11:36.514 }, 00:11:36.514 { 00:11:36.514 "name": "BaseBdev3", 00:11:36.514 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:36.514 "is_configured": true, 00:11:36.514 "data_offset": 2048, 00:11:36.514 "data_size": 63488 00:11:36.514 }, 00:11:36.514 { 00:11:36.514 "name": "BaseBdev4", 00:11:36.514 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:36.514 "is_configured": true, 00:11:36.514 "data_offset": 2048, 00:11:36.514 "data_size": 63488 00:11:36.514 } 00:11:36.514 ] 00:11:36.514 }' 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.514 23:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.083 
[2024-12-06 23:45:48.399165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.083 23:45:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.083 "name": "Existed_Raid", 00:11:37.083 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:37.083 "strip_size_kb": 0, 00:11:37.083 "state": "configuring", 00:11:37.083 "raid_level": "raid1", 00:11:37.083 "superblock": true, 00:11:37.083 "num_base_bdevs": 4, 00:11:37.083 "num_base_bdevs_discovered": 2, 00:11:37.083 "num_base_bdevs_operational": 4, 00:11:37.083 "base_bdevs_list": [ 00:11:37.083 { 00:11:37.083 "name": "BaseBdev1", 00:11:37.083 "uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:37.083 "is_configured": true, 00:11:37.083 "data_offset": 2048, 00:11:37.083 "data_size": 63488 00:11:37.083 }, 00:11:37.083 { 00:11:37.083 "name": null, 00:11:37.083 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:37.083 "is_configured": false, 00:11:37.083 "data_offset": 0, 00:11:37.083 "data_size": 63488 00:11:37.083 }, 00:11:37.083 { 00:11:37.083 "name": null, 00:11:37.083 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:37.083 "is_configured": false, 00:11:37.083 "data_offset": 0, 00:11:37.083 "data_size": 63488 00:11:37.083 }, 00:11:37.083 { 00:11:37.083 "name": "BaseBdev4", 00:11:37.083 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:37.083 "is_configured": true, 00:11:37.083 "data_offset": 2048, 00:11:37.083 "data_size": 63488 00:11:37.083 } 00:11:37.083 ] 00:11:37.083 }' 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.083 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.343 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.343 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:37.343 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.343 
23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.343 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.343 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:37.343 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:37.343 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.343 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.604 [2024-12-06 23:45:48.910312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.604 "name": "Existed_Raid", 00:11:37.604 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:37.604 "strip_size_kb": 0, 00:11:37.604 "state": "configuring", 00:11:37.604 "raid_level": "raid1", 00:11:37.604 "superblock": true, 00:11:37.604 "num_base_bdevs": 4, 00:11:37.604 "num_base_bdevs_discovered": 3, 00:11:37.604 "num_base_bdevs_operational": 4, 00:11:37.604 "base_bdevs_list": [ 00:11:37.604 { 00:11:37.604 "name": "BaseBdev1", 00:11:37.604 "uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:37.604 "is_configured": true, 00:11:37.604 "data_offset": 2048, 00:11:37.604 "data_size": 63488 00:11:37.604 }, 00:11:37.604 { 00:11:37.604 "name": null, 00:11:37.604 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:37.604 "is_configured": false, 00:11:37.604 "data_offset": 0, 00:11:37.604 "data_size": 63488 00:11:37.604 }, 00:11:37.604 { 00:11:37.604 "name": "BaseBdev3", 00:11:37.604 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:37.604 "is_configured": true, 00:11:37.604 "data_offset": 2048, 00:11:37.604 "data_size": 63488 00:11:37.604 }, 00:11:37.604 { 00:11:37.604 "name": "BaseBdev4", 00:11:37.604 "uuid": 
"609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:37.604 "is_configured": true, 00:11:37.604 "data_offset": 2048, 00:11:37.604 "data_size": 63488 00:11:37.604 } 00:11:37.604 ] 00:11:37.604 }' 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.604 23:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.864 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.864 [2024-12-06 23:45:49.405454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.124 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.124 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.124 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.124 23:45:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.125 "name": "Existed_Raid", 00:11:38.125 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:38.125 "strip_size_kb": 0, 00:11:38.125 "state": "configuring", 00:11:38.125 "raid_level": "raid1", 00:11:38.125 "superblock": true, 00:11:38.125 "num_base_bdevs": 4, 00:11:38.125 "num_base_bdevs_discovered": 2, 00:11:38.125 "num_base_bdevs_operational": 4, 00:11:38.125 "base_bdevs_list": [ 00:11:38.125 { 00:11:38.125 "name": null, 00:11:38.125 
"uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:38.125 "is_configured": false, 00:11:38.125 "data_offset": 0, 00:11:38.125 "data_size": 63488 00:11:38.125 }, 00:11:38.125 { 00:11:38.125 "name": null, 00:11:38.125 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:38.125 "is_configured": false, 00:11:38.125 "data_offset": 0, 00:11:38.125 "data_size": 63488 00:11:38.125 }, 00:11:38.125 { 00:11:38.125 "name": "BaseBdev3", 00:11:38.125 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:38.125 "is_configured": true, 00:11:38.125 "data_offset": 2048, 00:11:38.125 "data_size": 63488 00:11:38.125 }, 00:11:38.125 { 00:11:38.125 "name": "BaseBdev4", 00:11:38.125 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:38.125 "is_configured": true, 00:11:38.125 "data_offset": 2048, 00:11:38.125 "data_size": 63488 00:11:38.125 } 00:11:38.125 ] 00:11:38.125 }' 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.125 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.694 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.694 23:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.694 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.694 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.694 23:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.694 [2024-12-06 23:45:50.014892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.694 23:45:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.694 "name": "Existed_Raid", 00:11:38.694 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:38.694 "strip_size_kb": 0, 00:11:38.694 "state": "configuring", 00:11:38.694 "raid_level": "raid1", 00:11:38.694 "superblock": true, 00:11:38.694 "num_base_bdevs": 4, 00:11:38.694 "num_base_bdevs_discovered": 3, 00:11:38.694 "num_base_bdevs_operational": 4, 00:11:38.694 "base_bdevs_list": [ 00:11:38.694 { 00:11:38.694 "name": null, 00:11:38.694 "uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:38.694 "is_configured": false, 00:11:38.694 "data_offset": 0, 00:11:38.694 "data_size": 63488 00:11:38.694 }, 00:11:38.694 { 00:11:38.694 "name": "BaseBdev2", 00:11:38.694 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:38.694 "is_configured": true, 00:11:38.694 "data_offset": 2048, 00:11:38.694 "data_size": 63488 00:11:38.694 }, 00:11:38.694 { 00:11:38.694 "name": "BaseBdev3", 00:11:38.694 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:38.694 "is_configured": true, 00:11:38.694 "data_offset": 2048, 00:11:38.694 "data_size": 63488 00:11:38.694 }, 00:11:38.694 { 00:11:38.694 "name": "BaseBdev4", 00:11:38.694 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:38.694 "is_configured": true, 00:11:38.694 "data_offset": 2048, 00:11:38.694 "data_size": 63488 00:11:38.694 } 00:11:38.694 ] 00:11:38.694 }' 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.694 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.953 23:45:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6121cccc-437d-44af-ad0c-103530aa321f 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.953 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.213 [2024-12-06 23:45:50.532833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:39.213 [2024-12-06 23:45:50.533192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:39.213 [2024-12-06 23:45:50.533217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.213 [2024-12-06 23:45:50.533527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:39.213 [2024-12-06 23:45:50.533714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:39.213 [2024-12-06 23:45:50.533725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:39.213 NewBaseBdev 00:11:39.213 [2024-12-06 23:45:50.533883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.213 23:45:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.213 [ 00:11:39.213 { 00:11:39.213 "name": "NewBaseBdev", 00:11:39.213 "aliases": [ 00:11:39.213 "6121cccc-437d-44af-ad0c-103530aa321f" 00:11:39.213 ], 00:11:39.213 "product_name": "Malloc disk", 00:11:39.213 "block_size": 512, 00:11:39.213 "num_blocks": 65536, 00:11:39.213 "uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:39.213 "assigned_rate_limits": { 00:11:39.213 "rw_ios_per_sec": 0, 00:11:39.213 "rw_mbytes_per_sec": 0, 00:11:39.213 "r_mbytes_per_sec": 0, 00:11:39.213 "w_mbytes_per_sec": 0 00:11:39.213 }, 00:11:39.213 "claimed": true, 00:11:39.213 "claim_type": "exclusive_write", 00:11:39.213 "zoned": false, 00:11:39.213 "supported_io_types": { 00:11:39.213 "read": true, 00:11:39.213 "write": true, 00:11:39.213 "unmap": true, 00:11:39.213 "flush": true, 00:11:39.213 "reset": true, 00:11:39.213 "nvme_admin": false, 00:11:39.213 "nvme_io": false, 00:11:39.213 "nvme_io_md": false, 00:11:39.213 "write_zeroes": true, 00:11:39.213 "zcopy": true, 00:11:39.213 "get_zone_info": false, 00:11:39.213 "zone_management": false, 00:11:39.213 "zone_append": false, 00:11:39.213 "compare": false, 00:11:39.213 "compare_and_write": false, 00:11:39.213 "abort": true, 00:11:39.213 "seek_hole": false, 00:11:39.213 "seek_data": false, 00:11:39.213 "copy": true, 00:11:39.213 "nvme_iov_md": false 00:11:39.213 }, 00:11:39.213 "memory_domains": [ 00:11:39.213 { 00:11:39.213 "dma_device_id": "system", 00:11:39.213 "dma_device_type": 1 00:11:39.213 }, 00:11:39.213 { 00:11:39.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.213 "dma_device_type": 2 00:11:39.213 } 00:11:39.213 ], 00:11:39.213 "driver_specific": {} 00:11:39.213 } 00:11:39.213 ] 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.213 23:45:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:39.213 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.214 "name": "Existed_Raid", 00:11:39.214 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:39.214 "strip_size_kb": 0, 00:11:39.214 
"state": "online", 00:11:39.214 "raid_level": "raid1", 00:11:39.214 "superblock": true, 00:11:39.214 "num_base_bdevs": 4, 00:11:39.214 "num_base_bdevs_discovered": 4, 00:11:39.214 "num_base_bdevs_operational": 4, 00:11:39.214 "base_bdevs_list": [ 00:11:39.214 { 00:11:39.214 "name": "NewBaseBdev", 00:11:39.214 "uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:39.214 "is_configured": true, 00:11:39.214 "data_offset": 2048, 00:11:39.214 "data_size": 63488 00:11:39.214 }, 00:11:39.214 { 00:11:39.214 "name": "BaseBdev2", 00:11:39.214 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:39.214 "is_configured": true, 00:11:39.214 "data_offset": 2048, 00:11:39.214 "data_size": 63488 00:11:39.214 }, 00:11:39.214 { 00:11:39.214 "name": "BaseBdev3", 00:11:39.214 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:39.214 "is_configured": true, 00:11:39.214 "data_offset": 2048, 00:11:39.214 "data_size": 63488 00:11:39.214 }, 00:11:39.214 { 00:11:39.214 "name": "BaseBdev4", 00:11:39.214 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:39.214 "is_configured": true, 00:11:39.214 "data_offset": 2048, 00:11:39.214 "data_size": 63488 00:11:39.214 } 00:11:39.214 ] 00:11:39.214 }' 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.214 23:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.782 
23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.782 [2024-12-06 23:45:51.052390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.782 "name": "Existed_Raid", 00:11:39.782 "aliases": [ 00:11:39.782 "0e85bd47-55dd-45ab-ba5e-fc8af4132aea" 00:11:39.782 ], 00:11:39.782 "product_name": "Raid Volume", 00:11:39.782 "block_size": 512, 00:11:39.782 "num_blocks": 63488, 00:11:39.782 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:39.782 "assigned_rate_limits": { 00:11:39.782 "rw_ios_per_sec": 0, 00:11:39.782 "rw_mbytes_per_sec": 0, 00:11:39.782 "r_mbytes_per_sec": 0, 00:11:39.782 "w_mbytes_per_sec": 0 00:11:39.782 }, 00:11:39.782 "claimed": false, 00:11:39.782 "zoned": false, 00:11:39.782 "supported_io_types": { 00:11:39.782 "read": true, 00:11:39.782 "write": true, 00:11:39.782 "unmap": false, 00:11:39.782 "flush": false, 00:11:39.782 "reset": true, 00:11:39.782 "nvme_admin": false, 00:11:39.782 "nvme_io": false, 00:11:39.782 "nvme_io_md": false, 00:11:39.782 "write_zeroes": true, 00:11:39.782 "zcopy": false, 00:11:39.782 "get_zone_info": false, 00:11:39.782 "zone_management": false, 00:11:39.782 "zone_append": false, 00:11:39.782 "compare": false, 00:11:39.782 "compare_and_write": false, 00:11:39.782 
"abort": false, 00:11:39.782 "seek_hole": false, 00:11:39.782 "seek_data": false, 00:11:39.782 "copy": false, 00:11:39.782 "nvme_iov_md": false 00:11:39.782 }, 00:11:39.782 "memory_domains": [ 00:11:39.782 { 00:11:39.782 "dma_device_id": "system", 00:11:39.782 "dma_device_type": 1 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.782 "dma_device_type": 2 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "dma_device_id": "system", 00:11:39.782 "dma_device_type": 1 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.782 "dma_device_type": 2 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "dma_device_id": "system", 00:11:39.782 "dma_device_type": 1 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.782 "dma_device_type": 2 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "dma_device_id": "system", 00:11:39.782 "dma_device_type": 1 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.782 "dma_device_type": 2 00:11:39.782 } 00:11:39.782 ], 00:11:39.782 "driver_specific": { 00:11:39.782 "raid": { 00:11:39.782 "uuid": "0e85bd47-55dd-45ab-ba5e-fc8af4132aea", 00:11:39.782 "strip_size_kb": 0, 00:11:39.782 "state": "online", 00:11:39.782 "raid_level": "raid1", 00:11:39.782 "superblock": true, 00:11:39.782 "num_base_bdevs": 4, 00:11:39.782 "num_base_bdevs_discovered": 4, 00:11:39.782 "num_base_bdevs_operational": 4, 00:11:39.782 "base_bdevs_list": [ 00:11:39.782 { 00:11:39.782 "name": "NewBaseBdev", 00:11:39.782 "uuid": "6121cccc-437d-44af-ad0c-103530aa321f", 00:11:39.782 "is_configured": true, 00:11:39.782 "data_offset": 2048, 00:11:39.782 "data_size": 63488 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "name": "BaseBdev2", 00:11:39.782 "uuid": "19f95ce8-d2fa-411f-9afe-26b81ea128c6", 00:11:39.782 "is_configured": true, 00:11:39.782 "data_offset": 2048, 00:11:39.782 "data_size": 63488 00:11:39.782 }, 00:11:39.782 { 
00:11:39.782 "name": "BaseBdev3", 00:11:39.782 "uuid": "8d562ff4-582d-4cad-9e85-be0c8641b131", 00:11:39.782 "is_configured": true, 00:11:39.782 "data_offset": 2048, 00:11:39.782 "data_size": 63488 00:11:39.782 }, 00:11:39.782 { 00:11:39.782 "name": "BaseBdev4", 00:11:39.782 "uuid": "609627d4-fd62-4b25-bfa3-c36dac2ecdad", 00:11:39.782 "is_configured": true, 00:11:39.782 "data_offset": 2048, 00:11:39.782 "data_size": 63488 00:11:39.782 } 00:11:39.782 ] 00:11:39.782 } 00:11:39.782 } 00:11:39.782 }' 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:39.782 BaseBdev2 00:11:39.782 BaseBdev3 00:11:39.782 BaseBdev4' 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:39.782 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.783 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.042 [2024-12-06 23:45:51.355479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.042 [2024-12-06 23:45:51.355603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.042 [2024-12-06 23:45:51.355733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.042 [2024-12-06 23:45:51.356077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.042 [2024-12-06 23:45:51.356093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73779 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73779 ']' 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73779 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73779 00:11:40.042 killing process with pid 73779 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73779' 00:11:40.042 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73779 00:11:40.042 [2024-12-06 23:45:51.404632] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.043 23:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73779 00:11:40.302 [2024-12-06 23:45:51.839549] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.683 23:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:41.683 00:11:41.683 real 0m11.636s 00:11:41.683 user 0m18.132s 00:11:41.683 sys 0m2.165s 00:11:41.683 23:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:41.683 ************************************ 00:11:41.683 END TEST raid_state_function_test_sb 00:11:41.683 ************************************ 00:11:41.683 23:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.683 23:45:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:41.683 23:45:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:41.683 23:45:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.683 23:45:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.683 ************************************ 00:11:41.683 START TEST raid_superblock_test 00:11:41.683 ************************************ 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:41.683 23:45:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74449 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74449 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74449 ']' 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.683 23:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.943 [2024-12-06 23:45:53.268579] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:11:41.943 [2024-12-06 23:45:53.268803] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74449 ] 00:11:41.943 [2024-12-06 23:45:53.443547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.206 [2024-12-06 23:45:53.586232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.472 [2024-12-06 23:45:53.831812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.472 [2024-12-06 23:45:53.831882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:42.734 
23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 malloc1 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.734 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 [2024-12-06 23:45:54.176464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:42.734 [2024-12-06 23:45:54.176606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.734 [2024-12-06 23:45:54.176650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:42.734 [2024-12-06 23:45:54.176694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.735 [2024-12-06 23:45:54.179127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.735 [2024-12-06 23:45:54.179203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:42.735 pt1 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.735 malloc2 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.735 [2024-12-06 23:45:54.242741] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.735 [2024-12-06 23:45:54.242803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.735 [2024-12-06 23:45:54.242831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:42.735 [2024-12-06 23:45:54.242841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.735 [2024-12-06 23:45:54.245289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.735 [2024-12-06 23:45:54.245326] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.735 
pt2 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.735 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.996 malloc3 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.996 [2024-12-06 23:45:54.317255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:42.996 [2024-12-06 23:45:54.317395] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.996 [2024-12-06 23:45:54.317437] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:42.996 [2024-12-06 23:45:54.317468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.996 [2024-12-06 23:45:54.319966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.996 [2024-12-06 23:45:54.320042] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:42.996 pt3 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.996 malloc4 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.996 [2024-12-06 23:45:54.383516] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:42.996 [2024-12-06 23:45:54.383647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.996 [2024-12-06 23:45:54.383703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:42.996 [2024-12-06 23:45:54.383733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.996 [2024-12-06 23:45:54.386072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.996 [2024-12-06 23:45:54.386141] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:42.996 pt4 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.996 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.996 [2024-12-06 23:45:54.395548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:42.996 [2024-12-06 23:45:54.397760] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.996 [2024-12-06 23:45:54.397868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:42.996 [2024-12-06 23:45:54.397956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:42.996 [2024-12-06 23:45:54.398201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:42.996 [2024-12-06 23:45:54.398252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.996 [2024-12-06 23:45:54.398548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:42.996 [2024-12-06 23:45:54.398797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:42.996 [2024-12-06 23:45:54.398848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:42.997 [2024-12-06 23:45:54.399074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.997 
23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.997 "name": "raid_bdev1", 00:11:42.997 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:42.997 "strip_size_kb": 0, 00:11:42.997 "state": "online", 00:11:42.997 "raid_level": "raid1", 00:11:42.997 "superblock": true, 00:11:42.997 "num_base_bdevs": 4, 00:11:42.997 "num_base_bdevs_discovered": 4, 00:11:42.997 "num_base_bdevs_operational": 4, 00:11:42.997 "base_bdevs_list": [ 00:11:42.997 { 00:11:42.997 "name": "pt1", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.997 "is_configured": true, 00:11:42.997 "data_offset": 2048, 00:11:42.997 "data_size": 63488 00:11:42.997 }, 00:11:42.997 { 00:11:42.997 "name": "pt2", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.997 "is_configured": true, 00:11:42.997 "data_offset": 2048, 00:11:42.997 "data_size": 63488 00:11:42.997 }, 00:11:42.997 { 00:11:42.997 "name": "pt3", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.997 "is_configured": true, 00:11:42.997 "data_offset": 2048, 00:11:42.997 "data_size": 63488 
00:11:42.997 }, 00:11:42.997 { 00:11:42.997 "name": "pt4", 00:11:42.997 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:42.997 "is_configured": true, 00:11:42.997 "data_offset": 2048, 00:11:42.997 "data_size": 63488 00:11:42.997 } 00:11:42.997 ] 00:11:42.997 }' 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.997 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.565 [2024-12-06 23:45:54.839184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.565 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.565 "name": "raid_bdev1", 00:11:43.565 "aliases": [ 00:11:43.565 "0e74584f-0b45-4b39-8eff-3ef0b0e92c29" 00:11:43.565 ], 
00:11:43.565 "product_name": "Raid Volume", 00:11:43.565 "block_size": 512, 00:11:43.565 "num_blocks": 63488, 00:11:43.565 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:43.565 "assigned_rate_limits": { 00:11:43.565 "rw_ios_per_sec": 0, 00:11:43.565 "rw_mbytes_per_sec": 0, 00:11:43.565 "r_mbytes_per_sec": 0, 00:11:43.565 "w_mbytes_per_sec": 0 00:11:43.565 }, 00:11:43.565 "claimed": false, 00:11:43.565 "zoned": false, 00:11:43.565 "supported_io_types": { 00:11:43.565 "read": true, 00:11:43.565 "write": true, 00:11:43.565 "unmap": false, 00:11:43.565 "flush": false, 00:11:43.565 "reset": true, 00:11:43.565 "nvme_admin": false, 00:11:43.565 "nvme_io": false, 00:11:43.565 "nvme_io_md": false, 00:11:43.565 "write_zeroes": true, 00:11:43.565 "zcopy": false, 00:11:43.565 "get_zone_info": false, 00:11:43.565 "zone_management": false, 00:11:43.565 "zone_append": false, 00:11:43.565 "compare": false, 00:11:43.565 "compare_and_write": false, 00:11:43.565 "abort": false, 00:11:43.566 "seek_hole": false, 00:11:43.566 "seek_data": false, 00:11:43.566 "copy": false, 00:11:43.566 "nvme_iov_md": false 00:11:43.566 }, 00:11:43.566 "memory_domains": [ 00:11:43.566 { 00:11:43.566 "dma_device_id": "system", 00:11:43.566 "dma_device_type": 1 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.566 "dma_device_type": 2 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "dma_device_id": "system", 00:11:43.566 "dma_device_type": 1 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.566 "dma_device_type": 2 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "dma_device_id": "system", 00:11:43.566 "dma_device_type": 1 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.566 "dma_device_type": 2 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "dma_device_id": "system", 00:11:43.566 "dma_device_type": 1 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:43.566 "dma_device_type": 2 00:11:43.566 } 00:11:43.566 ], 00:11:43.566 "driver_specific": { 00:11:43.566 "raid": { 00:11:43.566 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:43.566 "strip_size_kb": 0, 00:11:43.566 "state": "online", 00:11:43.566 "raid_level": "raid1", 00:11:43.566 "superblock": true, 00:11:43.566 "num_base_bdevs": 4, 00:11:43.566 "num_base_bdevs_discovered": 4, 00:11:43.566 "num_base_bdevs_operational": 4, 00:11:43.566 "base_bdevs_list": [ 00:11:43.566 { 00:11:43.566 "name": "pt1", 00:11:43.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:43.566 "is_configured": true, 00:11:43.566 "data_offset": 2048, 00:11:43.566 "data_size": 63488 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "name": "pt2", 00:11:43.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.566 "is_configured": true, 00:11:43.566 "data_offset": 2048, 00:11:43.566 "data_size": 63488 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "name": "pt3", 00:11:43.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.566 "is_configured": true, 00:11:43.566 "data_offset": 2048, 00:11:43.566 "data_size": 63488 00:11:43.566 }, 00:11:43.566 { 00:11:43.566 "name": "pt4", 00:11:43.566 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:43.566 "is_configured": true, 00:11:43.566 "data_offset": 2048, 00:11:43.566 "data_size": 63488 00:11:43.566 } 00:11:43.566 ] 00:11:43.566 } 00:11:43.566 } 00:11:43.566 }' 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:43.566 pt2 00:11:43.566 pt3 00:11:43.566 pt4' 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.566 23:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.566 23:45:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.566 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:43.825 [2024-12-06 23:45:55.186579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0e74584f-0b45-4b39-8eff-3ef0b0e92c29 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0e74584f-0b45-4b39-8eff-3ef0b0e92c29 ']' 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.825 [2024-12-06 23:45:55.238086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.825 [2024-12-06 23:45:55.238139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.825 [2024-12-06 23:45:55.238247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.825 [2024-12-06 23:45:55.238344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.825 [2024-12-06 23:45:55.238369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.825 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.826 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.084 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.084 [2024-12-06 23:45:55.405793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:44.084 [2024-12-06 23:45:55.408085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:44.084 [2024-12-06 23:45:55.408141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:44.084 [2024-12-06 23:45:55.408178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:44.085 [2024-12-06 23:45:55.408233] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:44.085 [2024-12-06 23:45:55.408296] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:44.085 [2024-12-06 23:45:55.408315] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:44.085 [2024-12-06 23:45:55.408335] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:44.085 [2024-12-06 23:45:55.408349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.085 [2024-12-06 23:45:55.408361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:11:44.085 request: 00:11:44.085 { 00:11:44.085 "name": "raid_bdev1", 00:11:44.085 "raid_level": "raid1", 00:11:44.085 "base_bdevs": [ 00:11:44.085 "malloc1", 00:11:44.085 "malloc2", 00:11:44.085 "malloc3", 00:11:44.085 "malloc4" 00:11:44.085 ], 00:11:44.085 "superblock": false, 00:11:44.085 "method": "bdev_raid_create", 00:11:44.085 "req_id": 1 00:11:44.085 } 00:11:44.085 Got JSON-RPC error response 00:11:44.085 response: 00:11:44.085 { 00:11:44.085 "code": -17, 00:11:44.085 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:44.085 } 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:44.085 23:45:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.085 [2024-12-06 23:45:55.473641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:44.085 [2024-12-06 23:45:55.473775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.085 [2024-12-06 23:45:55.473811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:44.085 [2024-12-06 23:45:55.473845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.085 [2024-12-06 23:45:55.476591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.085 [2024-12-06 23:45:55.476711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:44.085 [2024-12-06 23:45:55.476838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:44.085 [2024-12-06 23:45:55.476923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:44.085 pt1 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.085 23:45:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.085 "name": "raid_bdev1", 00:11:44.085 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:44.085 "strip_size_kb": 0, 00:11:44.085 "state": "configuring", 00:11:44.085 "raid_level": "raid1", 00:11:44.085 "superblock": true, 00:11:44.085 "num_base_bdevs": 4, 00:11:44.085 "num_base_bdevs_discovered": 1, 00:11:44.085 "num_base_bdevs_operational": 4, 00:11:44.085 "base_bdevs_list": [ 00:11:44.085 { 00:11:44.085 "name": "pt1", 00:11:44.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.085 "is_configured": true, 00:11:44.085 "data_offset": 2048, 00:11:44.085 "data_size": 63488 00:11:44.085 }, 00:11:44.085 { 00:11:44.085 "name": null, 00:11:44.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.085 "is_configured": false, 00:11:44.085 "data_offset": 2048, 00:11:44.085 "data_size": 63488 00:11:44.085 }, 00:11:44.085 { 00:11:44.085 "name": null, 00:11:44.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.085 
"is_configured": false, 00:11:44.085 "data_offset": 2048, 00:11:44.085 "data_size": 63488 00:11:44.085 }, 00:11:44.085 { 00:11:44.085 "name": null, 00:11:44.085 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.085 "is_configured": false, 00:11:44.085 "data_offset": 2048, 00:11:44.085 "data_size": 63488 00:11:44.085 } 00:11:44.085 ] 00:11:44.085 }' 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.085 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.358 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:44.358 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.358 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.358 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.617 [2024-12-06 23:45:55.924912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.617 [2024-12-06 23:45:55.925001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.617 [2024-12-06 23:45:55.925026] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:44.617 [2024-12-06 23:45:55.925038] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.617 [2024-12-06 23:45:55.925560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.617 [2024-12-06 23:45:55.925582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.617 [2024-12-06 23:45:55.925693] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:44.617 [2024-12-06 23:45:55.925736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:44.617 pt2 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.617 [2024-12-06 23:45:55.936889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.617 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.617 "name": "raid_bdev1", 00:11:44.617 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:44.617 "strip_size_kb": 0, 00:11:44.617 "state": "configuring", 00:11:44.617 "raid_level": "raid1", 00:11:44.617 "superblock": true, 00:11:44.617 "num_base_bdevs": 4, 00:11:44.617 "num_base_bdevs_discovered": 1, 00:11:44.617 "num_base_bdevs_operational": 4, 00:11:44.617 "base_bdevs_list": [ 00:11:44.617 { 00:11:44.617 "name": "pt1", 00:11:44.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.617 "is_configured": true, 00:11:44.617 "data_offset": 2048, 00:11:44.617 "data_size": 63488 00:11:44.617 }, 00:11:44.617 { 00:11:44.617 "name": null, 00:11:44.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.618 "is_configured": false, 00:11:44.618 "data_offset": 0, 00:11:44.618 "data_size": 63488 00:11:44.618 }, 00:11:44.618 { 00:11:44.618 "name": null, 00:11:44.618 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.618 "is_configured": false, 00:11:44.618 "data_offset": 2048, 00:11:44.618 "data_size": 63488 00:11:44.618 }, 00:11:44.618 { 00:11:44.618 "name": null, 00:11:44.618 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.618 "is_configured": false, 00:11:44.618 "data_offset": 2048, 00:11:44.618 "data_size": 63488 00:11:44.618 } 00:11:44.618 ] 00:11:44.618 }' 00:11:44.618 23:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.618 23:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.876 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:44.876 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:44.876 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.876 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.876 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.876 [2024-12-06 23:45:56.320234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.876 [2024-12-06 23:45:56.320392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.876 [2024-12-06 23:45:56.320442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:44.876 [2024-12-06 23:45:56.320469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.876 [2024-12-06 23:45:56.321034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.876 [2024-12-06 23:45:56.321092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.876 [2024-12-06 23:45:56.321213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:44.876 [2024-12-06 23:45:56.321264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:44.876 pt2 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:44.877 23:45:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.877 [2024-12-06 23:45:56.332173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:44.877 [2024-12-06 23:45:56.332263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.877 [2024-12-06 23:45:56.332298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:44.877 [2024-12-06 23:45:56.332336] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.877 [2024-12-06 23:45:56.332779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.877 [2024-12-06 23:45:56.332832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:44.877 [2024-12-06 23:45:56.332926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:44.877 [2024-12-06 23:45:56.332969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:44.877 pt3 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.877 [2024-12-06 23:45:56.344118] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:44.877 [2024-12-06 
23:45:56.344160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.877 [2024-12-06 23:45:56.344177] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:44.877 [2024-12-06 23:45:56.344185] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.877 [2024-12-06 23:45:56.344573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.877 [2024-12-06 23:45:56.344589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:44.877 [2024-12-06 23:45:56.344646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:44.877 [2024-12-06 23:45:56.344687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:44.877 [2024-12-06 23:45:56.344828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:44.877 [2024-12-06 23:45:56.344837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:44.877 [2024-12-06 23:45:56.345109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:44.877 [2024-12-06 23:45:56.345273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:44.877 [2024-12-06 23:45:56.345286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:44.877 [2024-12-06 23:45:56.345424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.877 pt4 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.877 "name": "raid_bdev1", 00:11:44.877 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:44.877 "strip_size_kb": 0, 00:11:44.877 "state": "online", 00:11:44.877 "raid_level": "raid1", 00:11:44.877 "superblock": true, 00:11:44.877 "num_base_bdevs": 4, 00:11:44.877 
"num_base_bdevs_discovered": 4, 00:11:44.877 "num_base_bdevs_operational": 4, 00:11:44.877 "base_bdevs_list": [ 00:11:44.877 { 00:11:44.877 "name": "pt1", 00:11:44.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.877 "is_configured": true, 00:11:44.877 "data_offset": 2048, 00:11:44.877 "data_size": 63488 00:11:44.877 }, 00:11:44.877 { 00:11:44.877 "name": "pt2", 00:11:44.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.877 "is_configured": true, 00:11:44.877 "data_offset": 2048, 00:11:44.877 "data_size": 63488 00:11:44.877 }, 00:11:44.877 { 00:11:44.877 "name": "pt3", 00:11:44.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.877 "is_configured": true, 00:11:44.877 "data_offset": 2048, 00:11:44.877 "data_size": 63488 00:11:44.877 }, 00:11:44.877 { 00:11:44.877 "name": "pt4", 00:11:44.877 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.877 "is_configured": true, 00:11:44.877 "data_offset": 2048, 00:11:44.877 "data_size": 63488 00:11:44.877 } 00:11:44.877 ] 00:11:44.877 }' 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.877 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.445 [2024-12-06 23:45:56.783820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.445 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.445 "name": "raid_bdev1", 00:11:45.445 "aliases": [ 00:11:45.445 "0e74584f-0b45-4b39-8eff-3ef0b0e92c29" 00:11:45.445 ], 00:11:45.445 "product_name": "Raid Volume", 00:11:45.445 "block_size": 512, 00:11:45.445 "num_blocks": 63488, 00:11:45.445 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:45.445 "assigned_rate_limits": { 00:11:45.445 "rw_ios_per_sec": 0, 00:11:45.445 "rw_mbytes_per_sec": 0, 00:11:45.445 "r_mbytes_per_sec": 0, 00:11:45.445 "w_mbytes_per_sec": 0 00:11:45.445 }, 00:11:45.445 "claimed": false, 00:11:45.445 "zoned": false, 00:11:45.445 "supported_io_types": { 00:11:45.445 "read": true, 00:11:45.445 "write": true, 00:11:45.445 "unmap": false, 00:11:45.445 "flush": false, 00:11:45.445 "reset": true, 00:11:45.445 "nvme_admin": false, 00:11:45.445 "nvme_io": false, 00:11:45.445 "nvme_io_md": false, 00:11:45.445 "write_zeroes": true, 00:11:45.445 "zcopy": false, 00:11:45.445 "get_zone_info": false, 00:11:45.445 "zone_management": false, 00:11:45.445 "zone_append": false, 00:11:45.445 "compare": false, 00:11:45.445 "compare_and_write": false, 00:11:45.445 "abort": false, 00:11:45.445 "seek_hole": false, 00:11:45.445 "seek_data": false, 00:11:45.445 "copy": false, 00:11:45.445 "nvme_iov_md": false 00:11:45.446 }, 00:11:45.446 "memory_domains": [ 00:11:45.446 { 00:11:45.446 "dma_device_id": "system", 00:11:45.446 
"dma_device_type": 1 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.446 "dma_device_type": 2 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "dma_device_id": "system", 00:11:45.446 "dma_device_type": 1 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.446 "dma_device_type": 2 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "dma_device_id": "system", 00:11:45.446 "dma_device_type": 1 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.446 "dma_device_type": 2 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "dma_device_id": "system", 00:11:45.446 "dma_device_type": 1 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.446 "dma_device_type": 2 00:11:45.446 } 00:11:45.446 ], 00:11:45.446 "driver_specific": { 00:11:45.446 "raid": { 00:11:45.446 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:45.446 "strip_size_kb": 0, 00:11:45.446 "state": "online", 00:11:45.446 "raid_level": "raid1", 00:11:45.446 "superblock": true, 00:11:45.446 "num_base_bdevs": 4, 00:11:45.446 "num_base_bdevs_discovered": 4, 00:11:45.446 "num_base_bdevs_operational": 4, 00:11:45.446 "base_bdevs_list": [ 00:11:45.446 { 00:11:45.446 "name": "pt1", 00:11:45.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.446 "is_configured": true, 00:11:45.446 "data_offset": 2048, 00:11:45.446 "data_size": 63488 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "name": "pt2", 00:11:45.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.446 "is_configured": true, 00:11:45.446 "data_offset": 2048, 00:11:45.446 "data_size": 63488 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "name": "pt3", 00:11:45.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.446 "is_configured": true, 00:11:45.446 "data_offset": 2048, 00:11:45.446 "data_size": 63488 00:11:45.446 }, 00:11:45.446 { 00:11:45.446 "name": "pt4", 00:11:45.446 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:45.446 "is_configured": true, 00:11:45.446 "data_offset": 2048, 00:11:45.446 "data_size": 63488 00:11:45.446 } 00:11:45.446 ] 00:11:45.446 } 00:11:45.446 } 00:11:45.446 }' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:45.446 pt2 00:11:45.446 pt3 00:11:45.446 pt4' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.446 23:45:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.446 23:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.705 [2024-12-06 23:45:57.091246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0e74584f-0b45-4b39-8eff-3ef0b0e92c29 '!=' 0e74584f-0b45-4b39-8eff-3ef0b0e92c29 ']' 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.705 [2024-12-06 23:45:57.134897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:45.705 23:45:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.705 "name": "raid_bdev1", 00:11:45.705 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:45.705 "strip_size_kb": 0, 00:11:45.705 "state": "online", 
00:11:45.705 "raid_level": "raid1", 00:11:45.705 "superblock": true, 00:11:45.705 "num_base_bdevs": 4, 00:11:45.705 "num_base_bdevs_discovered": 3, 00:11:45.705 "num_base_bdevs_operational": 3, 00:11:45.705 "base_bdevs_list": [ 00:11:45.705 { 00:11:45.705 "name": null, 00:11:45.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.705 "is_configured": false, 00:11:45.705 "data_offset": 0, 00:11:45.705 "data_size": 63488 00:11:45.705 }, 00:11:45.705 { 00:11:45.705 "name": "pt2", 00:11:45.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.705 "is_configured": true, 00:11:45.705 "data_offset": 2048, 00:11:45.705 "data_size": 63488 00:11:45.705 }, 00:11:45.705 { 00:11:45.705 "name": "pt3", 00:11:45.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.705 "is_configured": true, 00:11:45.705 "data_offset": 2048, 00:11:45.705 "data_size": 63488 00:11:45.705 }, 00:11:45.705 { 00:11:45.705 "name": "pt4", 00:11:45.705 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.705 "is_configured": true, 00:11:45.705 "data_offset": 2048, 00:11:45.705 "data_size": 63488 00:11:45.705 } 00:11:45.705 ] 00:11:45.705 }' 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.705 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.271 [2024-12-06 23:45:57.582129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.271 [2024-12-06 23:45:57.582240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.271 [2024-12-06 23:45:57.582351] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:46.271 [2024-12-06 23:45:57.582452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.271 [2024-12-06 23:45:57.582504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:46.271 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:46.272 
23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.272 [2024-12-06 23:45:57.673947] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.272 [2024-12-06 23:45:57.674015] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.272 [2024-12-06 23:45:57.674036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:46.272 [2024-12-06 23:45:57.674045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.272 [2024-12-06 23:45:57.676672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.272 [2024-12-06 23:45:57.676707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.272 [2024-12-06 23:45:57.676798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.272 [2024-12-06 23:45:57.676850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.272 pt2 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.272 "name": "raid_bdev1", 00:11:46.272 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:46.272 "strip_size_kb": 0, 00:11:46.272 "state": "configuring", 00:11:46.272 "raid_level": "raid1", 00:11:46.272 "superblock": true, 00:11:46.272 "num_base_bdevs": 4, 00:11:46.272 "num_base_bdevs_discovered": 1, 00:11:46.272 "num_base_bdevs_operational": 3, 00:11:46.272 "base_bdevs_list": [ 00:11:46.272 { 00:11:46.272 "name": null, 00:11:46.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.272 "is_configured": false, 00:11:46.272 "data_offset": 2048, 00:11:46.272 "data_size": 63488 00:11:46.272 }, 00:11:46.272 { 00:11:46.272 "name": "pt2", 00:11:46.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.272 "is_configured": true, 00:11:46.272 "data_offset": 2048, 00:11:46.272 "data_size": 63488 00:11:46.272 }, 00:11:46.272 { 00:11:46.272 "name": null, 00:11:46.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.272 "is_configured": false, 00:11:46.272 "data_offset": 2048, 00:11:46.272 "data_size": 63488 00:11:46.272 }, 00:11:46.272 { 00:11:46.272 "name": null, 00:11:46.272 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.272 "is_configured": false, 00:11:46.272 "data_offset": 2048, 00:11:46.272 "data_size": 63488 00:11:46.272 } 00:11:46.272 ] 00:11:46.272 }' 
00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.272 23:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.840 [2024-12-06 23:45:58.141161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.840 [2024-12-06 23:45:58.141274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.840 [2024-12-06 23:45:58.141315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:46.840 [2024-12-06 23:45:58.141352] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.840 [2024-12-06 23:45:58.141854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.840 [2024-12-06 23:45:58.141913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.840 [2024-12-06 23:45:58.142024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:46.840 [2024-12-06 23:45:58.142073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.840 pt3 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.840 "name": "raid_bdev1", 00:11:46.840 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:46.840 "strip_size_kb": 0, 00:11:46.840 "state": "configuring", 00:11:46.840 "raid_level": "raid1", 00:11:46.840 "superblock": true, 00:11:46.840 "num_base_bdevs": 4, 00:11:46.840 "num_base_bdevs_discovered": 2, 00:11:46.840 "num_base_bdevs_operational": 3, 00:11:46.840 
"base_bdevs_list": [ 00:11:46.840 { 00:11:46.840 "name": null, 00:11:46.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.840 "is_configured": false, 00:11:46.840 "data_offset": 2048, 00:11:46.840 "data_size": 63488 00:11:46.840 }, 00:11:46.840 { 00:11:46.840 "name": "pt2", 00:11:46.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.840 "is_configured": true, 00:11:46.840 "data_offset": 2048, 00:11:46.840 "data_size": 63488 00:11:46.840 }, 00:11:46.840 { 00:11:46.840 "name": "pt3", 00:11:46.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.840 "is_configured": true, 00:11:46.840 "data_offset": 2048, 00:11:46.840 "data_size": 63488 00:11:46.840 }, 00:11:46.840 { 00:11:46.840 "name": null, 00:11:46.840 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.840 "is_configured": false, 00:11:46.840 "data_offset": 2048, 00:11:46.840 "data_size": 63488 00:11:46.840 } 00:11:46.840 ] 00:11:46.840 }' 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.840 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.099 [2024-12-06 23:45:58.572486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.099 [2024-12-06 23:45:58.572576] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.099 [2024-12-06 23:45:58.572610] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:47.099 [2024-12-06 23:45:58.572621] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.099 [2024-12-06 23:45:58.573167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.099 [2024-12-06 23:45:58.573199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.099 [2024-12-06 23:45:58.573304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:47.099 [2024-12-06 23:45:58.573331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.099 [2024-12-06 23:45:58.573478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:47.099 [2024-12-06 23:45:58.573493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.099 [2024-12-06 23:45:58.573782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:47.099 [2024-12-06 23:45:58.573962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:47.099 [2024-12-06 23:45:58.573976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:47.099 [2024-12-06 23:45:58.574126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.099 pt4 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.099 "name": "raid_bdev1", 00:11:47.099 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:47.099 "strip_size_kb": 0, 00:11:47.099 "state": "online", 00:11:47.099 "raid_level": "raid1", 00:11:47.099 "superblock": true, 00:11:47.099 "num_base_bdevs": 4, 00:11:47.099 "num_base_bdevs_discovered": 3, 00:11:47.099 "num_base_bdevs_operational": 3, 00:11:47.099 "base_bdevs_list": [ 00:11:47.099 { 00:11:47.099 "name": null, 00:11:47.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.099 "is_configured": false, 00:11:47.099 
"data_offset": 2048, 00:11:47.099 "data_size": 63488 00:11:47.099 }, 00:11:47.099 { 00:11:47.099 "name": "pt2", 00:11:47.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.099 "is_configured": true, 00:11:47.099 "data_offset": 2048, 00:11:47.099 "data_size": 63488 00:11:47.099 }, 00:11:47.099 { 00:11:47.099 "name": "pt3", 00:11:47.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.099 "is_configured": true, 00:11:47.099 "data_offset": 2048, 00:11:47.099 "data_size": 63488 00:11:47.099 }, 00:11:47.099 { 00:11:47.099 "name": "pt4", 00:11:47.099 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.099 "is_configured": true, 00:11:47.099 "data_offset": 2048, 00:11:47.099 "data_size": 63488 00:11:47.099 } 00:11:47.099 ] 00:11:47.099 }' 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.099 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.667 23:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.667 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.667 23:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.667 [2024-12-06 23:45:59.003693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.667 [2024-12-06 23:45:59.003801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.667 [2024-12-06 23:45:59.003915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.667 [2024-12-06 23:45:59.004011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.667 [2024-12-06 23:45:59.004048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:47.667 23:45:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.667 [2024-12-06 23:45:59.075533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.667 [2024-12-06 23:45:59.075600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:47.667 [2024-12-06 23:45:59.075619] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:47.667 [2024-12-06 23:45:59.075632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.667 [2024-12-06 23:45:59.078159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.667 [2024-12-06 23:45:59.078204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.667 [2024-12-06 23:45:59.078288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:47.667 [2024-12-06 23:45:59.078338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.667 [2024-12-06 23:45:59.078482] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:47.667 [2024-12-06 23:45:59.078502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.667 [2024-12-06 23:45:59.078519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:47.667 [2024-12-06 23:45:59.078582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.667 [2024-12-06 23:45:59.078704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.667 pt1 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.667 "name": "raid_bdev1", 00:11:47.667 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:47.667 "strip_size_kb": 0, 00:11:47.667 "state": "configuring", 00:11:47.667 "raid_level": "raid1", 00:11:47.667 "superblock": true, 00:11:47.667 "num_base_bdevs": 4, 00:11:47.667 "num_base_bdevs_discovered": 2, 00:11:47.667 "num_base_bdevs_operational": 3, 00:11:47.667 "base_bdevs_list": [ 00:11:47.667 { 00:11:47.667 "name": null, 00:11:47.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.667 "is_configured": false, 00:11:47.667 "data_offset": 2048, 00:11:47.667 
"data_size": 63488 00:11:47.667 }, 00:11:47.667 { 00:11:47.667 "name": "pt2", 00:11:47.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.667 "is_configured": true, 00:11:47.667 "data_offset": 2048, 00:11:47.667 "data_size": 63488 00:11:47.667 }, 00:11:47.667 { 00:11:47.667 "name": "pt3", 00:11:47.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.667 "is_configured": true, 00:11:47.667 "data_offset": 2048, 00:11:47.667 "data_size": 63488 00:11:47.667 }, 00:11:47.667 { 00:11:47.667 "name": null, 00:11:47.667 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.667 "is_configured": false, 00:11:47.667 "data_offset": 2048, 00:11:47.667 "data_size": 63488 00:11:47.667 } 00:11:47.667 ] 00:11:47.667 }' 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.667 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.233 [2024-12-06 
23:45:59.606761] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:48.233 [2024-12-06 23:45:59.606887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.233 [2024-12-06 23:45:59.606930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:48.233 [2024-12-06 23:45:59.606959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.233 [2024-12-06 23:45:59.607494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.233 [2024-12-06 23:45:59.607555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:48.233 [2024-12-06 23:45:59.607689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:48.233 [2024-12-06 23:45:59.607747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:48.233 [2024-12-06 23:45:59.607917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:48.233 [2024-12-06 23:45:59.607954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.233 [2024-12-06 23:45:59.608248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:48.233 [2024-12-06 23:45:59.608435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:48.233 [2024-12-06 23:45:59.608475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:48.233 [2024-12-06 23:45:59.608677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.233 pt4 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.233 23:45:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.233 "name": "raid_bdev1", 00:11:48.233 "uuid": "0e74584f-0b45-4b39-8eff-3ef0b0e92c29", 00:11:48.233 "strip_size_kb": 0, 00:11:48.233 "state": "online", 00:11:48.233 "raid_level": "raid1", 00:11:48.233 "superblock": true, 00:11:48.233 "num_base_bdevs": 4, 00:11:48.233 "num_base_bdevs_discovered": 3, 00:11:48.233 "num_base_bdevs_operational": 3, 00:11:48.233 "base_bdevs_list": [ 00:11:48.233 { 
00:11:48.233 "name": null, 00:11:48.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.233 "is_configured": false, 00:11:48.233 "data_offset": 2048, 00:11:48.233 "data_size": 63488 00:11:48.233 }, 00:11:48.233 { 00:11:48.233 "name": "pt2", 00:11:48.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.233 "is_configured": true, 00:11:48.233 "data_offset": 2048, 00:11:48.233 "data_size": 63488 00:11:48.233 }, 00:11:48.233 { 00:11:48.233 "name": "pt3", 00:11:48.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.233 "is_configured": true, 00:11:48.233 "data_offset": 2048, 00:11:48.233 "data_size": 63488 00:11:48.233 }, 00:11:48.233 { 00:11:48.233 "name": "pt4", 00:11:48.233 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.233 "is_configured": true, 00:11:48.233 "data_offset": 2048, 00:11:48.233 "data_size": 63488 00:11:48.233 } 00:11:48.233 ] 00:11:48.233 }' 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.233 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.492 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:48.492 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.492 23:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.492 23:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:48.492 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.492 23:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:48.492 23:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.492 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.492 
23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.492 23:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:48.492 [2024-12-06 23:46:00.034329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.492 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0e74584f-0b45-4b39-8eff-3ef0b0e92c29 '!=' 0e74584f-0b45-4b39-8eff-3ef0b0e92c29 ']' 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74449 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74449 ']' 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74449 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74449 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74449' 00:11:48.751 killing process with pid 74449 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74449 00:11:48.751 23:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74449 00:11:48.751 [2024-12-06 23:46:00.124961] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.751 [2024-12-06 23:46:00.125077] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.751 [2024-12-06 23:46:00.125233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.751 [2024-12-06 23:46:00.125252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:49.010 [2024-12-06 23:46:00.554501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.389 ************************************ 00:11:50.389 END TEST raid_superblock_test 00:11:50.389 ************************************ 00:11:50.389 23:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:50.389 00:11:50.389 real 0m8.597s 00:11:50.389 user 0m13.258s 00:11:50.389 sys 0m1.645s 00:11:50.389 23:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.389 23:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.389 23:46:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:50.389 23:46:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.389 23:46:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.389 23:46:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.389 ************************************ 00:11:50.389 START TEST raid_read_error_test 00:11:50.389 ************************************ 00:11:50.389 23:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:50.389 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:50.389 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:50.389 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:50.389 23:46:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.389 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.389 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SC7L1Nvn94 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74938 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74938 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74938 ']' 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.390 23:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.649 [2024-12-06 23:46:01.950854] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:11:50.649 [2024-12-06 23:46:01.951041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74938 ] 00:11:50.649 [2024-12-06 23:46:02.125254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.909 [2024-12-06 23:46:02.256721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.168 [2024-12-06 23:46:02.491996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.168 [2024-12-06 23:46:02.492170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.428 BaseBdev1_malloc 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.428 true 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.428 [2024-12-06 23:46:02.842259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:51.428 [2024-12-06 23:46:02.842413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.428 [2024-12-06 23:46:02.842439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:51.428 [2024-12-06 23:46:02.842451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.428 [2024-12-06 23:46:02.844985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.428 [2024-12-06 23:46:02.845028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.428 BaseBdev1 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.428 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.429 BaseBdev2_malloc 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.429 true 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.429 [2024-12-06 23:46:02.915317] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.429 [2024-12-06 23:46:02.915453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.429 [2024-12-06 23:46:02.915473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:51.429 [2024-12-06 23:46:02.915485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.429 [2024-12-06 23:46:02.917801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.429 [2024-12-06 23:46:02.917838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.429 BaseBdev2 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.429 BaseBdev3_malloc 00:11:51.429 23:46:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.429 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.688 true 00:11:51.688 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.688 23:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:51.688 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.688 23:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.688 [2024-12-06 23:46:02.999922] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:51.688 [2024-12-06 23:46:02.999996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.689 [2024-12-06 23:46:03.000013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:51.689 [2024-12-06 23:46:03.000025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.689 [2024-12-06 23:46:03.002517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.689 [2024-12-06 23:46:03.002561] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:51.689 BaseBdev3 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.689 BaseBdev4_malloc 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.689 true 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.689 [2024-12-06 23:46:03.074158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:51.689 [2024-12-06 23:46:03.074223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.689 [2024-12-06 23:46:03.074241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:51.689 [2024-12-06 23:46:03.074252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.689 [2024-12-06 23:46:03.076641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.689 [2024-12-06 23:46:03.076691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:51.689 BaseBdev4 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.689 [2024-12-06 23:46:03.086211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.689 [2024-12-06 23:46:03.088398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.689 [2024-12-06 23:46:03.088472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.689 [2024-12-06 23:46:03.088531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.689 [2024-12-06 23:46:03.088784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:51.689 [2024-12-06 23:46:03.088799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.689 [2024-12-06 23:46:03.089056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:51.689 [2024-12-06 23:46:03.089236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:51.689 [2024-12-06 23:46:03.089255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:51.689 [2024-12-06 23:46:03.089412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:51.689 23:46:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.689 "name": "raid_bdev1", 00:11:51.689 "uuid": "cbe9434d-da38-4f87-9b8e-b5dece4749f3", 00:11:51.689 "strip_size_kb": 0, 00:11:51.689 "state": "online", 00:11:51.689 "raid_level": "raid1", 00:11:51.689 "superblock": true, 00:11:51.689 "num_base_bdevs": 4, 00:11:51.689 "num_base_bdevs_discovered": 4, 00:11:51.689 "num_base_bdevs_operational": 4, 00:11:51.689 "base_bdevs_list": [ 00:11:51.689 { 
00:11:51.689 "name": "BaseBdev1", 00:11:51.689 "uuid": "e974a38a-43cb-56ae-979e-92bdda8f8031", 00:11:51.689 "is_configured": true, 00:11:51.689 "data_offset": 2048, 00:11:51.689 "data_size": 63488 00:11:51.689 }, 00:11:51.689 { 00:11:51.689 "name": "BaseBdev2", 00:11:51.689 "uuid": "66b0f939-43e6-5a65-ad79-89be75f7ab0a", 00:11:51.689 "is_configured": true, 00:11:51.689 "data_offset": 2048, 00:11:51.689 "data_size": 63488 00:11:51.689 }, 00:11:51.689 { 00:11:51.689 "name": "BaseBdev3", 00:11:51.689 "uuid": "ae06816f-768a-5c8b-9429-2e9bd9ffbe39", 00:11:51.689 "is_configured": true, 00:11:51.689 "data_offset": 2048, 00:11:51.689 "data_size": 63488 00:11:51.689 }, 00:11:51.689 { 00:11:51.689 "name": "BaseBdev4", 00:11:51.689 "uuid": "368176f4-efa6-5e00-9c29-c2fb07f634ae", 00:11:51.689 "is_configured": true, 00:11:51.689 "data_offset": 2048, 00:11:51.689 "data_size": 63488 00:11:51.689 } 00:11:51.689 ] 00:11:51.689 }' 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.689 23:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.258 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:52.258 23:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.258 [2024-12-06 23:46:03.630521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.197 23:46:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.197 23:46:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.197 "name": "raid_bdev1", 00:11:53.197 "uuid": "cbe9434d-da38-4f87-9b8e-b5dece4749f3", 00:11:53.197 "strip_size_kb": 0, 00:11:53.197 "state": "online", 00:11:53.197 "raid_level": "raid1", 00:11:53.197 "superblock": true, 00:11:53.197 "num_base_bdevs": 4, 00:11:53.197 "num_base_bdevs_discovered": 4, 00:11:53.197 "num_base_bdevs_operational": 4, 00:11:53.197 "base_bdevs_list": [ 00:11:53.197 { 00:11:53.197 "name": "BaseBdev1", 00:11:53.197 "uuid": "e974a38a-43cb-56ae-979e-92bdda8f8031", 00:11:53.197 "is_configured": true, 00:11:53.197 "data_offset": 2048, 00:11:53.197 "data_size": 63488 00:11:53.197 }, 00:11:53.197 { 00:11:53.197 "name": "BaseBdev2", 00:11:53.197 "uuid": "66b0f939-43e6-5a65-ad79-89be75f7ab0a", 00:11:53.197 "is_configured": true, 00:11:53.197 "data_offset": 2048, 00:11:53.197 "data_size": 63488 00:11:53.197 }, 00:11:53.197 { 00:11:53.197 "name": "BaseBdev3", 00:11:53.197 "uuid": "ae06816f-768a-5c8b-9429-2e9bd9ffbe39", 00:11:53.197 "is_configured": true, 00:11:53.197 "data_offset": 2048, 00:11:53.197 "data_size": 63488 00:11:53.197 }, 00:11:53.197 { 00:11:53.197 "name": "BaseBdev4", 00:11:53.197 "uuid": "368176f4-efa6-5e00-9c29-c2fb07f634ae", 00:11:53.197 "is_configured": true, 00:11:53.197 "data_offset": 2048, 00:11:53.197 "data_size": 63488 00:11:53.197 } 00:11:53.197 ] 00:11:53.197 }' 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.197 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.457 [2024-12-06 23:46:04.984712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.457 [2024-12-06 23:46:04.984855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.457 [2024-12-06 23:46:04.987839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.457 [2024-12-06 23:46:04.987965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.457 [2024-12-06 23:46:04.988121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.457 [2024-12-06 23:46:04.988173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:53.457 { 00:11:53.457 "results": [ 00:11:53.457 { 00:11:53.457 "job": "raid_bdev1", 00:11:53.457 "core_mask": "0x1", 00:11:53.457 "workload": "randrw", 00:11:53.457 "percentage": 50, 00:11:53.457 "status": "finished", 00:11:53.457 "queue_depth": 1, 00:11:53.457 "io_size": 131072, 00:11:53.457 "runtime": 1.355148, 00:11:53.457 "iops": 7916.478495337778, 00:11:53.457 "mibps": 989.5598119172223, 00:11:53.457 "io_failed": 0, 00:11:53.457 "io_timeout": 0, 00:11:53.457 "avg_latency_us": 123.71012703157716, 00:11:53.457 "min_latency_us": 23.36419213973799, 00:11:53.457 "max_latency_us": 1581.1633187772925 00:11:53.457 } 00:11:53.457 ], 00:11:53.457 "core_count": 1 00:11:53.457 } 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74938 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74938 ']' 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74938 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.457 23:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74938 00:11:53.721 23:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.721 23:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.721 23:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74938' 00:11:53.721 killing process with pid 74938 00:11:53.721 23:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74938 00:11:53.721 [2024-12-06 23:46:05.032568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.721 23:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74938 00:11:53.988 [2024-12-06 23:46:05.391861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SC7L1Nvn94 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:55.392 00:11:55.392 real 0m4.845s 00:11:55.392 user 0m5.528s 00:11:55.392 sys 0m0.726s 
00:11:55.392 ************************************ 00:11:55.392 END TEST raid_read_error_test 00:11:55.392 ************************************ 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.392 23:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.392 23:46:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:55.392 23:46:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.392 23:46:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.392 23:46:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.392 ************************************ 00:11:55.392 START TEST raid_write_error_test 00:11:55.392 ************************************ 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nbBpZOyfFs 00:11:55.392 23:46:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75084 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75084 00:11:55.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75084 ']' 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.392 23:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.393 23:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.393 23:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.393 23:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.393 [2024-12-06 23:46:06.867364] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:11:55.393 [2024-12-06 23:46:06.867474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75084 ] 00:11:55.653 [2024-12-06 23:46:07.022947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.653 [2024-12-06 23:46:07.154402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.913 [2024-12-06 23:46:07.386843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.913 [2024-12-06 23:46:07.386891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.173 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.173 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.173 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.173 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:56.173 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.173 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.434 BaseBdev1_malloc 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.434 true 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.434 [2024-12-06 23:46:07.762435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:56.434 [2024-12-06 23:46:07.762525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.434 [2024-12-06 23:46:07.762547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:56.434 [2024-12-06 23:46:07.762559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.434 [2024-12-06 23:46:07.765034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.434 [2024-12-06 23:46:07.765081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.434 BaseBdev1 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.434 BaseBdev2_malloc 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:56.434 23:46:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.434 true 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.434 [2024-12-06 23:46:07.835861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:56.434 [2024-12-06 23:46:07.835918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.434 [2024-12-06 23:46:07.835935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:56.434 [2024-12-06 23:46:07.835946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.434 [2024-12-06 23:46:07.838219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.434 [2024-12-06 23:46:07.838256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:56.434 BaseBdev2 00:11:56.434 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:56.435 BaseBdev3_malloc 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.435 true 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.435 [2024-12-06 23:46:07.919305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:56.435 [2024-12-06 23:46:07.919355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.435 [2024-12-06 23:46:07.919372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:56.435 [2024-12-06 23:46:07.919383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.435 [2024-12-06 23:46:07.921722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.435 [2024-12-06 23:46:07.921757] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:56.435 BaseBdev3 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.435 BaseBdev4_malloc 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.435 true 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.435 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.435 [2024-12-06 23:46:07.993344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:56.435 [2024-12-06 23:46:07.993401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.435 [2024-12-06 23:46:07.993420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:56.435 [2024-12-06 23:46:07.993431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.695 [2024-12-06 23:46:07.995894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.695 [2024-12-06 23:46:07.996024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:56.695 BaseBdev4 
00:11:56.695 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.695 23:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:56.695 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.695 23:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.695 [2024-12-06 23:46:08.005393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.695 [2024-12-06 23:46:08.007444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.695 [2024-12-06 23:46:08.007572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.695 [2024-12-06 23:46:08.007638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.695 [2024-12-06 23:46:08.007891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:56.695 [2024-12-06 23:46:08.007907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:56.695 [2024-12-06 23:46:08.008149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:56.695 [2024-12-06 23:46:08.008338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:56.695 [2024-12-06 23:46:08.008347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:56.695 [2024-12-06 23:46:08.008498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.695 "name": "raid_bdev1", 00:11:56.695 "uuid": "087c5f35-a9b6-499c-bb31-02ee3c0c678e", 00:11:56.695 "strip_size_kb": 0, 00:11:56.695 "state": "online", 00:11:56.695 "raid_level": "raid1", 00:11:56.695 "superblock": true, 00:11:56.695 "num_base_bdevs": 4, 00:11:56.695 "num_base_bdevs_discovered": 4, 00:11:56.695 
"num_base_bdevs_operational": 4, 00:11:56.695 "base_bdevs_list": [ 00:11:56.695 { 00:11:56.695 "name": "BaseBdev1", 00:11:56.695 "uuid": "bf8fb3de-eb69-517a-be26-a7984d86058f", 00:11:56.695 "is_configured": true, 00:11:56.695 "data_offset": 2048, 00:11:56.695 "data_size": 63488 00:11:56.695 }, 00:11:56.695 { 00:11:56.695 "name": "BaseBdev2", 00:11:56.695 "uuid": "65d15f06-d8b3-5b5c-8ea2-6d03b33e13a0", 00:11:56.695 "is_configured": true, 00:11:56.695 "data_offset": 2048, 00:11:56.695 "data_size": 63488 00:11:56.695 }, 00:11:56.695 { 00:11:56.695 "name": "BaseBdev3", 00:11:56.695 "uuid": "6c44725e-38d8-55e8-91ed-440cb6662e42", 00:11:56.695 "is_configured": true, 00:11:56.695 "data_offset": 2048, 00:11:56.695 "data_size": 63488 00:11:56.695 }, 00:11:56.695 { 00:11:56.695 "name": "BaseBdev4", 00:11:56.695 "uuid": "e21934e5-c286-5a04-9b43-d17a1ac10229", 00:11:56.695 "is_configured": true, 00:11:56.695 "data_offset": 2048, 00:11:56.695 "data_size": 63488 00:11:56.695 } 00:11:56.695 ] 00:11:56.695 }' 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.695 23:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.955 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:56.955 23:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:57.216 [2024-12-06 23:46:08.545726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.156 [2024-12-06 23:46:09.463456] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:58.156 [2024-12-06 23:46:09.463669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.156 [2024-12-06 23:46:09.463985] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.156 "name": "raid_bdev1", 00:11:58.156 "uuid": "087c5f35-a9b6-499c-bb31-02ee3c0c678e", 00:11:58.156 "strip_size_kb": 0, 00:11:58.156 "state": "online", 00:11:58.156 "raid_level": "raid1", 00:11:58.156 "superblock": true, 00:11:58.156 "num_base_bdevs": 4, 00:11:58.156 "num_base_bdevs_discovered": 3, 00:11:58.156 "num_base_bdevs_operational": 3, 00:11:58.156 "base_bdevs_list": [ 00:11:58.156 { 00:11:58.156 "name": null, 00:11:58.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.156 "is_configured": false, 00:11:58.156 "data_offset": 0, 00:11:58.156 "data_size": 63488 00:11:58.156 }, 00:11:58.156 { 00:11:58.156 "name": "BaseBdev2", 00:11:58.156 "uuid": "65d15f06-d8b3-5b5c-8ea2-6d03b33e13a0", 00:11:58.156 "is_configured": true, 00:11:58.156 "data_offset": 2048, 00:11:58.156 "data_size": 63488 00:11:58.156 }, 00:11:58.156 { 00:11:58.156 "name": "BaseBdev3", 00:11:58.156 "uuid": "6c44725e-38d8-55e8-91ed-440cb6662e42", 00:11:58.156 "is_configured": true, 00:11:58.156 "data_offset": 2048, 00:11:58.156 "data_size": 63488 00:11:58.156 }, 00:11:58.156 { 00:11:58.156 "name": "BaseBdev4", 00:11:58.156 "uuid": "e21934e5-c286-5a04-9b43-d17a1ac10229", 00:11:58.156 "is_configured": true, 00:11:58.156 "data_offset": 2048, 00:11:58.156 "data_size": 63488 00:11:58.156 } 00:11:58.156 ] 
00:11:58.156 }' 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.156 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.417 [2024-12-06 23:46:09.864588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.417 [2024-12-06 23:46:09.864754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.417 [2024-12-06 23:46:09.867658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.417 [2024-12-06 23:46:09.867718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.417 [2024-12-06 23:46:09.867830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.417 [2024-12-06 23:46:09.867845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:58.417 { 00:11:58.417 "results": [ 00:11:58.417 { 00:11:58.417 "job": "raid_bdev1", 00:11:58.417 "core_mask": "0x1", 00:11:58.417 "workload": "randrw", 00:11:58.417 "percentage": 50, 00:11:58.417 "status": "finished", 00:11:58.417 "queue_depth": 1, 00:11:58.417 "io_size": 131072, 00:11:58.417 "runtime": 1.319619, 00:11:58.417 "iops": 8673.715670962603, 00:11:58.417 "mibps": 1084.2144588703254, 00:11:58.417 "io_failed": 0, 00:11:58.417 "io_timeout": 0, 00:11:58.417 "avg_latency_us": 112.68696739655431, 00:11:58.417 "min_latency_us": 23.14061135371179, 00:11:58.417 "max_latency_us": 1509.6174672489083 00:11:58.417 } 00:11:58.417 ], 00:11:58.417 "core_count": 1 
00:11:58.417 } 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75084 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75084 ']' 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75084 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75084 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.417 killing process with pid 75084 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75084' 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75084 00:11:58.417 [2024-12-06 23:46:09.915501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.417 23:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75084 00:11:58.986 [2024-12-06 23:46:10.276159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nbBpZOyfFs 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:00.369 00:12:00.369 real 0m4.813s 00:12:00.369 user 0m5.501s 00:12:00.369 sys 0m0.676s 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.369 ************************************ 00:12:00.369 END TEST raid_write_error_test 00:12:00.369 ************************************ 00:12:00.369 23:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.369 23:46:11 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:00.369 23:46:11 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:00.369 23:46:11 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:00.369 23:46:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:00.369 23:46:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.369 23:46:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.369 ************************************ 00:12:00.369 START TEST raid_rebuild_test 00:12:00.369 ************************************ 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:00.369 
23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75228 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75228 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75228 ']' 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.369 23:46:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.370 23:46:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.370 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:00.370 Zero copy mechanism will not be used. 00:12:00.370 [2024-12-06 23:46:11.743142] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:12:00.370 [2024-12-06 23:46:11.743256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75228 ] 00:12:00.370 [2024-12-06 23:46:11.918500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.629 [2024-12-06 23:46:12.057506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.889 [2024-12-06 23:46:12.285722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.889 [2024-12-06 23:46:12.285848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.150 BaseBdev1_malloc 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.150 [2024-12-06 23:46:12.620754] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:01.150 
[2024-12-06 23:46:12.620832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.150 [2024-12-06 23:46:12.620858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:01.150 [2024-12-06 23:46:12.620871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.150 [2024-12-06 23:46:12.623331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.150 [2024-12-06 23:46:12.623375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:01.150 BaseBdev1 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.150 BaseBdev2_malloc 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.150 [2024-12-06 23:46:12.683771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:01.150 [2024-12-06 23:46:12.683930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.150 [2024-12-06 23:46:12.683977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:01.150 [2024-12-06 23:46:12.684010] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.150 [2024-12-06 23:46:12.686324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.150 [2024-12-06 23:46:12.686397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:01.150 BaseBdev2 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.150 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.410 spare_malloc 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.410 spare_delay 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.410 [2024-12-06 23:46:12.770782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:01.410 [2024-12-06 23:46:12.770844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:01.410 [2024-12-06 23:46:12.770865] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:01.410 [2024-12-06 23:46:12.770876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.410 [2024-12-06 23:46:12.773241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.410 [2024-12-06 23:46:12.773279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:01.410 spare 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.410 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.410 [2024-12-06 23:46:12.782823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.410 [2024-12-06 23:46:12.784931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.410 [2024-12-06 23:46:12.785020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:01.410 [2024-12-06 23:46:12.785034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:01.410 [2024-12-06 23:46:12.785278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:01.410 [2024-12-06 23:46:12.785446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:01.411 [2024-12-06 23:46:12.785458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:01.411 [2024-12-06 23:46:12.785610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.411 "name": "raid_bdev1", 00:12:01.411 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:01.411 "strip_size_kb": 0, 00:12:01.411 "state": "online", 00:12:01.411 
"raid_level": "raid1", 00:12:01.411 "superblock": false, 00:12:01.411 "num_base_bdevs": 2, 00:12:01.411 "num_base_bdevs_discovered": 2, 00:12:01.411 "num_base_bdevs_operational": 2, 00:12:01.411 "base_bdevs_list": [ 00:12:01.411 { 00:12:01.411 "name": "BaseBdev1", 00:12:01.411 "uuid": "cd308cfc-803f-5c67-b987-ce069f181995", 00:12:01.411 "is_configured": true, 00:12:01.411 "data_offset": 0, 00:12:01.411 "data_size": 65536 00:12:01.411 }, 00:12:01.411 { 00:12:01.411 "name": "BaseBdev2", 00:12:01.411 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:01.411 "is_configured": true, 00:12:01.411 "data_offset": 0, 00:12:01.411 "data_size": 65536 00:12:01.411 } 00:12:01.411 ] 00:12:01.411 }' 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.411 23:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.671 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:01.671 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.671 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.671 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:01.671 [2024-12-06 23:46:13.226319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:01.931 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:01.931 [2024-12-06 23:46:13.485794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:02.191 /dev/nbd0 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.191 1+0 records in 00:12:02.191 1+0 records out 00:12:02.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413766 s, 9.9 MB/s 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:02.191 23:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:07.463 65536+0 records in 00:12:07.463 65536+0 records out 00:12:07.463 33554432 bytes (34 MB, 32 MiB) copied, 4.58534 s, 7.3 MB/s 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:07.463 [2024-12-06 23:46:18.337088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.463 [2024-12-06 23:46:18.378590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.463 "name": "raid_bdev1", 00:12:07.463 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:07.463 "strip_size_kb": 0, 00:12:07.463 "state": "online", 00:12:07.463 "raid_level": "raid1", 00:12:07.463 "superblock": false, 00:12:07.463 "num_base_bdevs": 2, 00:12:07.463 "num_base_bdevs_discovered": 1, 00:12:07.463 "num_base_bdevs_operational": 1, 00:12:07.463 "base_bdevs_list": [ 00:12:07.463 { 00:12:07.463 "name": null, 00:12:07.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.463 "is_configured": false, 00:12:07.463 "data_offset": 0, 00:12:07.463 "data_size": 65536 00:12:07.463 }, 00:12:07.463 { 00:12:07.463 "name": "BaseBdev2", 00:12:07.463 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:07.463 "is_configured": true, 00:12:07.463 "data_offset": 0, 00:12:07.463 "data_size": 65536 00:12:07.463 } 00:12:07.463 ] 00:12:07.463 }' 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.463 [2024-12-06 23:46:18.809924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.463 [2024-12-06 23:46:18.828429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:07.463 23:46:18 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.463 23:46:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:07.463 [2024-12-06 23:46:18.830667] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.398 "name": "raid_bdev1", 00:12:08.398 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:08.398 "strip_size_kb": 0, 00:12:08.398 "state": "online", 00:12:08.398 "raid_level": "raid1", 00:12:08.398 "superblock": false, 00:12:08.398 "num_base_bdevs": 2, 00:12:08.398 "num_base_bdevs_discovered": 2, 00:12:08.398 "num_base_bdevs_operational": 2, 00:12:08.398 "process": { 00:12:08.398 "type": "rebuild", 00:12:08.398 "target": "spare", 00:12:08.398 "progress": { 00:12:08.398 "blocks": 20480, 
00:12:08.398 "percent": 31 00:12:08.398 } 00:12:08.398 }, 00:12:08.398 "base_bdevs_list": [ 00:12:08.398 { 00:12:08.398 "name": "spare", 00:12:08.398 "uuid": "9199f891-62a6-5ad4-adbe-93aa59c99d41", 00:12:08.398 "is_configured": true, 00:12:08.398 "data_offset": 0, 00:12:08.398 "data_size": 65536 00:12:08.398 }, 00:12:08.398 { 00:12:08.398 "name": "BaseBdev2", 00:12:08.398 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:08.398 "is_configured": true, 00:12:08.398 "data_offset": 0, 00:12:08.398 "data_size": 65536 00:12:08.398 } 00:12:08.398 ] 00:12:08.398 }' 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.398 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.658 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.658 23:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:08.658 23:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.658 23:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.658 [2024-12-06 23:46:19.970832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.658 [2024-12-06 23:46:20.040634] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:08.658 [2024-12-06 23:46:20.040823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.658 [2024-12-06 23:46:20.040850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.658 [2024-12-06 23:46:20.040864] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:08.658 23:46:20 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.658 "name": "raid_bdev1", 00:12:08.658 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:08.658 "strip_size_kb": 0, 00:12:08.658 "state": "online", 00:12:08.658 "raid_level": "raid1", 00:12:08.658 
"superblock": false, 00:12:08.658 "num_base_bdevs": 2, 00:12:08.658 "num_base_bdevs_discovered": 1, 00:12:08.658 "num_base_bdevs_operational": 1, 00:12:08.658 "base_bdevs_list": [ 00:12:08.658 { 00:12:08.658 "name": null, 00:12:08.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.658 "is_configured": false, 00:12:08.658 "data_offset": 0, 00:12:08.658 "data_size": 65536 00:12:08.658 }, 00:12:08.658 { 00:12:08.658 "name": "BaseBdev2", 00:12:08.658 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:08.658 "is_configured": true, 00:12:08.658 "data_offset": 0, 00:12:08.658 "data_size": 65536 00:12:08.658 } 00:12:08.658 ] 00:12:08.658 }' 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.658 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:09.227 "name": "raid_bdev1", 00:12:09.227 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:09.227 "strip_size_kb": 0, 00:12:09.227 "state": "online", 00:12:09.227 "raid_level": "raid1", 00:12:09.227 "superblock": false, 00:12:09.227 "num_base_bdevs": 2, 00:12:09.227 "num_base_bdevs_discovered": 1, 00:12:09.227 "num_base_bdevs_operational": 1, 00:12:09.227 "base_bdevs_list": [ 00:12:09.227 { 00:12:09.227 "name": null, 00:12:09.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.227 "is_configured": false, 00:12:09.227 "data_offset": 0, 00:12:09.227 "data_size": 65536 00:12:09.227 }, 00:12:09.227 { 00:12:09.227 "name": "BaseBdev2", 00:12:09.227 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:09.227 "is_configured": true, 00:12:09.227 "data_offset": 0, 00:12:09.227 "data_size": 65536 00:12:09.227 } 00:12:09.227 ] 00:12:09.227 }' 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.227 [2024-12-06 23:46:20.704175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.227 [2024-12-06 23:46:20.725113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:09.227 23:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.227 
23:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:09.227 [2024-12-06 23:46:20.727644] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.252 "name": "raid_bdev1", 00:12:10.252 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:10.252 "strip_size_kb": 0, 00:12:10.252 "state": "online", 00:12:10.252 "raid_level": "raid1", 00:12:10.252 "superblock": false, 00:12:10.252 "num_base_bdevs": 2, 00:12:10.252 "num_base_bdevs_discovered": 2, 00:12:10.252 "num_base_bdevs_operational": 2, 00:12:10.252 "process": { 00:12:10.252 "type": "rebuild", 00:12:10.252 "target": "spare", 00:12:10.252 "progress": { 00:12:10.252 "blocks": 20480, 00:12:10.252 "percent": 31 00:12:10.252 } 00:12:10.252 }, 00:12:10.252 "base_bdevs_list": [ 
00:12:10.252 { 00:12:10.252 "name": "spare", 00:12:10.252 "uuid": "9199f891-62a6-5ad4-adbe-93aa59c99d41", 00:12:10.252 "is_configured": true, 00:12:10.252 "data_offset": 0, 00:12:10.252 "data_size": 65536 00:12:10.252 }, 00:12:10.252 { 00:12:10.252 "name": "BaseBdev2", 00:12:10.252 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:10.252 "is_configured": true, 00:12:10.252 "data_offset": 0, 00:12:10.252 "data_size": 65536 00:12:10.252 } 00:12:10.252 ] 00:12:10.252 }' 00:12:10.252 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.512 
23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.512 "name": "raid_bdev1", 00:12:10.512 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:10.512 "strip_size_kb": 0, 00:12:10.512 "state": "online", 00:12:10.512 "raid_level": "raid1", 00:12:10.512 "superblock": false, 00:12:10.512 "num_base_bdevs": 2, 00:12:10.512 "num_base_bdevs_discovered": 2, 00:12:10.512 "num_base_bdevs_operational": 2, 00:12:10.512 "process": { 00:12:10.512 "type": "rebuild", 00:12:10.512 "target": "spare", 00:12:10.512 "progress": { 00:12:10.512 "blocks": 22528, 00:12:10.512 "percent": 34 00:12:10.512 } 00:12:10.512 }, 00:12:10.512 "base_bdevs_list": [ 00:12:10.512 { 00:12:10.512 "name": "spare", 00:12:10.512 "uuid": "9199f891-62a6-5ad4-adbe-93aa59c99d41", 00:12:10.512 "is_configured": true, 00:12:10.512 "data_offset": 0, 00:12:10.512 "data_size": 65536 00:12:10.512 }, 00:12:10.512 { 00:12:10.512 "name": "BaseBdev2", 00:12:10.512 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:10.512 "is_configured": true, 00:12:10.512 "data_offset": 0, 00:12:10.512 "data_size": 65536 00:12:10.512 } 00:12:10.512 ] 00:12:10.512 }' 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:10.512 23:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.512 23:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.512 23:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:11.451 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.451 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.451 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.451 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.451 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.451 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.712 "name": "raid_bdev1", 00:12:11.712 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:11.712 "strip_size_kb": 0, 00:12:11.712 "state": "online", 00:12:11.712 "raid_level": "raid1", 00:12:11.712 "superblock": false, 00:12:11.712 "num_base_bdevs": 2, 00:12:11.712 "num_base_bdevs_discovered": 2, 00:12:11.712 "num_base_bdevs_operational": 2, 00:12:11.712 "process": { 
00:12:11.712 "type": "rebuild", 00:12:11.712 "target": "spare", 00:12:11.712 "progress": { 00:12:11.712 "blocks": 45056, 00:12:11.712 "percent": 68 00:12:11.712 } 00:12:11.712 }, 00:12:11.712 "base_bdevs_list": [ 00:12:11.712 { 00:12:11.712 "name": "spare", 00:12:11.712 "uuid": "9199f891-62a6-5ad4-adbe-93aa59c99d41", 00:12:11.712 "is_configured": true, 00:12:11.712 "data_offset": 0, 00:12:11.712 "data_size": 65536 00:12:11.712 }, 00:12:11.712 { 00:12:11.712 "name": "BaseBdev2", 00:12:11.712 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:11.712 "is_configured": true, 00:12:11.712 "data_offset": 0, 00:12:11.712 "data_size": 65536 00:12:11.712 } 00:12:11.712 ] 00:12:11.712 }' 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.712 23:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:12.653 [2024-12-06 23:46:23.954198] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:12.654 [2024-12-06 23:46:23.954396] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:12.654 [2024-12-06 23:46:23.954483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.654 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.913 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.913 "name": "raid_bdev1", 00:12:12.913 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:12.913 "strip_size_kb": 0, 00:12:12.913 "state": "online", 00:12:12.913 "raid_level": "raid1", 00:12:12.913 "superblock": false, 00:12:12.914 "num_base_bdevs": 2, 00:12:12.914 "num_base_bdevs_discovered": 2, 00:12:12.914 "num_base_bdevs_operational": 2, 00:12:12.914 "base_bdevs_list": [ 00:12:12.914 { 00:12:12.914 "name": "spare", 00:12:12.914 "uuid": "9199f891-62a6-5ad4-adbe-93aa59c99d41", 00:12:12.914 "is_configured": true, 00:12:12.914 "data_offset": 0, 00:12:12.914 "data_size": 65536 00:12:12.914 }, 00:12:12.914 { 00:12:12.914 "name": "BaseBdev2", 00:12:12.914 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:12.914 "is_configured": true, 00:12:12.914 "data_offset": 0, 00:12:12.914 "data_size": 65536 00:12:12.914 } 00:12:12.914 ] 00:12:12.914 }' 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:12.914 23:46:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.914 "name": "raid_bdev1", 00:12:12.914 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:12.914 "strip_size_kb": 0, 00:12:12.914 "state": "online", 00:12:12.914 "raid_level": "raid1", 00:12:12.914 "superblock": false, 00:12:12.914 "num_base_bdevs": 2, 00:12:12.914 "num_base_bdevs_discovered": 2, 00:12:12.914 "num_base_bdevs_operational": 2, 00:12:12.914 "base_bdevs_list": [ 00:12:12.914 { 00:12:12.914 "name": "spare", 00:12:12.914 "uuid": "9199f891-62a6-5ad4-adbe-93aa59c99d41", 00:12:12.914 "is_configured": true, 
00:12:12.914 "data_offset": 0, 00:12:12.914 "data_size": 65536 00:12:12.914 }, 00:12:12.914 { 00:12:12.914 "name": "BaseBdev2", 00:12:12.914 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:12.914 "is_configured": true, 00:12:12.914 "data_offset": 0, 00:12:12.914 "data_size": 65536 00:12:12.914 } 00:12:12.914 ] 00:12:12.914 }' 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.914 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.174 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.174 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.174 "name": "raid_bdev1", 00:12:13.174 "uuid": "5e0ac4e3-9ef1-46d8-8148-ef92914f08d0", 00:12:13.174 "strip_size_kb": 0, 00:12:13.174 "state": "online", 00:12:13.174 "raid_level": "raid1", 00:12:13.174 "superblock": false, 00:12:13.174 "num_base_bdevs": 2, 00:12:13.174 "num_base_bdevs_discovered": 2, 00:12:13.174 "num_base_bdevs_operational": 2, 00:12:13.174 "base_bdevs_list": [ 00:12:13.174 { 00:12:13.174 "name": "spare", 00:12:13.174 "uuid": "9199f891-62a6-5ad4-adbe-93aa59c99d41", 00:12:13.174 "is_configured": true, 00:12:13.174 "data_offset": 0, 00:12:13.174 "data_size": 65536 00:12:13.174 }, 00:12:13.174 { 00:12:13.174 "name": "BaseBdev2", 00:12:13.174 "uuid": "efd89226-f3db-5c5c-a153-b4ce120099ae", 00:12:13.174 "is_configured": true, 00:12:13.174 "data_offset": 0, 00:12:13.174 "data_size": 65536 00:12:13.174 } 00:12:13.174 ] 00:12:13.174 }' 00:12:13.174 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.174 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.434 [2024-12-06 23:46:24.899746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.434 [2024-12-06 23:46:24.899877] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.434 [2024-12-06 23:46:24.899990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.434 [2024-12-06 23:46:24.900067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.434 [2024-12-06 23:46:24.900077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.434 23:46:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:13.694 /dev/nbd0 00:12:13.694 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:13.694 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.695 1+0 records in 00:12:13.695 1+0 records out 00:12:13.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430179 s, 9.5 MB/s 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.695 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:13.954 /dev/nbd1 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.954 1+0 records in 00:12:13.954 1+0 records out 00:12:13.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051457 s, 8.0 MB/s 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.954 23:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:14.213 23:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:14.213 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.213 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:14.213 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:14.213 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:14.213 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.213 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.471 23:46:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:14.729 23:46:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:14.729 23:46:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:14.729 23:46:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:14.729 23:46:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.729 23:46:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.729 23:46:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:14.729 23:46:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75228 00:12:14.730 23:46:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75228 ']' 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75228 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75228 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.730 killing process with pid 75228 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75228' 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75228 00:12:14.730 Received shutdown signal, test time was about 60.000000 seconds 00:12:14.730 00:12:14.730 Latency(us) 00:12:14.730 [2024-12-06T23:46:26.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.730 [2024-12-06T23:46:26.293Z] =================================================================================================================== 00:12:14.730 [2024-12-06T23:46:26.293Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:14.730 [2024-12-06 23:46:26.082394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.730 23:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75228 00:12:14.989 [2024-12-06 23:46:26.376275] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.927 23:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:15.927 00:12:15.927 real 0m15.797s 00:12:15.927 user 0m17.462s 00:12:15.927 sys 0m3.129s 00:12:15.927 23:46:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.927 23:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.927 ************************************ 00:12:15.927 END TEST raid_rebuild_test 00:12:15.927 ************************************ 00:12:16.187 23:46:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:16.187 23:46:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:16.187 23:46:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.187 23:46:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.187 ************************************ 00:12:16.187 START TEST raid_rebuild_test_sb 00:12:16.187 ************************************ 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75651 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75651 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75651 ']' 00:12:16.187 23:46:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.187 23:46:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:16.187 Zero copy mechanism will not be used. 00:12:16.187 [2024-12-06 23:46:27.601146] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:12:16.187 [2024-12-06 23:46:27.601265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75651 ] 00:12:16.446 [2024-12-06 23:46:27.774767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.446 [2024-12-06 23:46:27.882269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.704 [2024-12-06 23:46:28.057132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.704 [2024-12-06 23:46:28.057199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.964 BaseBdev1_malloc 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.964 [2024-12-06 23:46:28.472853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:16.964 [2024-12-06 23:46:28.472911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.964 [2024-12-06 23:46:28.472933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:16.964 [2024-12-06 23:46:28.472944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.964 [2024-12-06 23:46:28.474948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.964 [2024-12-06 23:46:28.474985] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.964 BaseBdev1 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.964 23:46:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.964 BaseBdev2_malloc 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.964 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.224 [2024-12-06 23:46:28.528960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:17.224 [2024-12-06 23:46:28.529033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.224 [2024-12-06 23:46:28.529056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:17.224 [2024-12-06 23:46:28.529066] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.224 [2024-12-06 23:46:28.531105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.224 [2024-12-06 23:46:28.531147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:17.224 BaseBdev2 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.224 spare_malloc 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.224 spare_delay 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.224 [2024-12-06 23:46:28.624700] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:17.224 [2024-12-06 23:46:28.624755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.224 [2024-12-06 23:46:28.624774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:17.224 [2024-12-06 23:46:28.624785] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.224 [2024-12-06 23:46:28.626768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.224 [2024-12-06 23:46:28.626802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:17.224 spare 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.224 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.225 23:46:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.225 [2024-12-06 23:46:28.636745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.225 [2024-12-06 23:46:28.638502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.225 [2024-12-06 23:46:28.638680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:17.225 [2024-12-06 23:46:28.638702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.225 [2024-12-06 23:46:28.638952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:17.225 [2024-12-06 23:46:28.639121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:17.225 [2024-12-06 23:46:28.639137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:17.225 [2024-12-06 23:46:28.639283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.225 "name": "raid_bdev1", 00:12:17.225 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:17.225 "strip_size_kb": 0, 00:12:17.225 "state": "online", 00:12:17.225 "raid_level": "raid1", 00:12:17.225 "superblock": true, 00:12:17.225 "num_base_bdevs": 2, 00:12:17.225 "num_base_bdevs_discovered": 2, 00:12:17.225 "num_base_bdevs_operational": 2, 00:12:17.225 "base_bdevs_list": [ 00:12:17.225 { 00:12:17.225 "name": "BaseBdev1", 00:12:17.225 "uuid": "03359972-39c7-50a1-97bf-1dc2119b4608", 00:12:17.225 "is_configured": true, 00:12:17.225 "data_offset": 2048, 00:12:17.225 "data_size": 63488 00:12:17.225 }, 00:12:17.225 { 00:12:17.225 "name": "BaseBdev2", 00:12:17.225 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:17.225 "is_configured": true, 00:12:17.225 "data_offset": 2048, 00:12:17.225 "data_size": 63488 00:12:17.225 } 00:12:17.225 ] 00:12:17.225 }' 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.225 23:46:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:17.795 [2024-12-06 23:46:29.096224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.795 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:18.056 [2024-12-06 23:46:29.359543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:18.056 /dev/nbd0 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.056 1+0 records in 00:12:18.056 1+0 records out 00:12:18.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204757 s, 20.0 MB/s 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:18.056 23:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:22.278 63488+0 records in 00:12:22.278 63488+0 records out 00:12:22.278 32505856 bytes (33 MB, 31 MiB) copied, 4.09356 s, 7.9 MB/s 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.278 23:46:33 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:22.278 [2024-12-06 23:46:33.721197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 [2024-12-06 23:46:33.753137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.278 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.279 "name": "raid_bdev1", 00:12:22.279 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:22.279 "strip_size_kb": 0, 00:12:22.279 "state": "online", 00:12:22.279 "raid_level": "raid1", 00:12:22.279 "superblock": true, 
00:12:22.279 "num_base_bdevs": 2, 00:12:22.279 "num_base_bdevs_discovered": 1, 00:12:22.279 "num_base_bdevs_operational": 1, 00:12:22.279 "base_bdevs_list": [ 00:12:22.279 { 00:12:22.279 "name": null, 00:12:22.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.279 "is_configured": false, 00:12:22.279 "data_offset": 0, 00:12:22.279 "data_size": 63488 00:12:22.279 }, 00:12:22.279 { 00:12:22.279 "name": "BaseBdev2", 00:12:22.279 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:22.279 "is_configured": true, 00:12:22.279 "data_offset": 2048, 00:12:22.279 "data_size": 63488 00:12:22.279 } 00:12:22.279 ] 00:12:22.279 }' 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.279 23:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 23:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:22.849 23:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.849 23:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 [2024-12-06 23:46:34.184403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.849 [2024-12-06 23:46:34.200686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:22.849 23:46:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.849 [2024-12-06 23:46:34.202515] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.849 23:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.787 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.787 "name": "raid_bdev1", 00:12:23.787 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:23.787 "strip_size_kb": 0, 00:12:23.787 "state": "online", 00:12:23.787 "raid_level": "raid1", 00:12:23.787 "superblock": true, 00:12:23.787 "num_base_bdevs": 2, 00:12:23.787 "num_base_bdevs_discovered": 2, 00:12:23.787 "num_base_bdevs_operational": 2, 00:12:23.787 "process": { 00:12:23.787 "type": "rebuild", 00:12:23.787 "target": "spare", 00:12:23.787 "progress": { 00:12:23.787 "blocks": 20480, 00:12:23.787 "percent": 32 00:12:23.787 } 00:12:23.787 }, 00:12:23.787 "base_bdevs_list": [ 00:12:23.787 { 00:12:23.787 "name": "spare", 00:12:23.787 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:23.787 "is_configured": true, 00:12:23.787 "data_offset": 2048, 00:12:23.787 "data_size": 63488 00:12:23.787 }, 00:12:23.787 { 00:12:23.787 "name": "BaseBdev2", 00:12:23.787 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:23.788 "is_configured": true, 00:12:23.788 "data_offset": 2048, 00:12:23.788 "data_size": 63488 
00:12:23.788 } 00:12:23.788 ] 00:12:23.788 }' 00:12:23.788 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.788 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.788 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.788 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.788 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.788 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.788 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.788 [2024-12-06 23:46:35.342495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.048 [2024-12-06 23:46:35.407511] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:24.048 [2024-12-06 23:46:35.407568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.048 [2024-12-06 23:46:35.407581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.048 [2024-12-06 23:46:35.407592] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.048 "name": "raid_bdev1", 00:12:24.048 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:24.048 "strip_size_kb": 0, 00:12:24.048 "state": "online", 00:12:24.048 "raid_level": "raid1", 00:12:24.048 "superblock": true, 00:12:24.048 "num_base_bdevs": 2, 00:12:24.048 "num_base_bdevs_discovered": 1, 00:12:24.048 "num_base_bdevs_operational": 1, 00:12:24.048 "base_bdevs_list": [ 00:12:24.048 { 00:12:24.048 "name": null, 00:12:24.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.048 "is_configured": false, 00:12:24.048 "data_offset": 0, 00:12:24.048 "data_size": 63488 00:12:24.048 }, 00:12:24.048 { 00:12:24.048 "name": "BaseBdev2", 00:12:24.048 "uuid": 
"f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:24.048 "is_configured": true, 00:12:24.048 "data_offset": 2048, 00:12:24.048 "data_size": 63488 00:12:24.048 } 00:12:24.048 ] 00:12:24.048 }' 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.048 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.618 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.618 "name": "raid_bdev1", 00:12:24.618 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:24.619 "strip_size_kb": 0, 00:12:24.619 "state": "online", 00:12:24.619 "raid_level": "raid1", 00:12:24.619 "superblock": true, 00:12:24.619 "num_base_bdevs": 2, 00:12:24.619 "num_base_bdevs_discovered": 1, 00:12:24.619 "num_base_bdevs_operational": 1, 00:12:24.619 "base_bdevs_list": [ 00:12:24.619 { 
00:12:24.619 "name": null, 00:12:24.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.619 "is_configured": false, 00:12:24.619 "data_offset": 0, 00:12:24.619 "data_size": 63488 00:12:24.619 }, 00:12:24.619 { 00:12:24.619 "name": "BaseBdev2", 00:12:24.619 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:24.619 "is_configured": true, 00:12:24.619 "data_offset": 2048, 00:12:24.619 "data_size": 63488 00:12:24.619 } 00:12:24.619 ] 00:12:24.619 }' 00:12:24.619 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.619 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.619 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.619 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.619 23:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.619 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 23:46:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 [2024-12-06 23:46:36.001176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.619 [2024-12-06 23:46:36.018986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:24.619 23:46:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 23:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:24.619 [2024-12-06 23:46:36.021189] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.557 23:46:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.557 "name": "raid_bdev1", 00:12:25.557 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:25.557 "strip_size_kb": 0, 00:12:25.557 "state": "online", 00:12:25.557 "raid_level": "raid1", 00:12:25.557 "superblock": true, 00:12:25.557 "num_base_bdevs": 2, 00:12:25.557 "num_base_bdevs_discovered": 2, 00:12:25.557 "num_base_bdevs_operational": 2, 00:12:25.557 "process": { 00:12:25.557 "type": "rebuild", 00:12:25.557 "target": "spare", 00:12:25.557 "progress": { 00:12:25.557 "blocks": 20480, 00:12:25.557 "percent": 32 00:12:25.557 } 00:12:25.557 }, 00:12:25.557 "base_bdevs_list": [ 00:12:25.557 { 00:12:25.557 "name": "spare", 00:12:25.557 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:25.557 "is_configured": true, 00:12:25.557 "data_offset": 2048, 00:12:25.557 "data_size": 63488 00:12:25.557 }, 00:12:25.557 { 00:12:25.557 "name": "BaseBdev2", 00:12:25.557 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:25.557 
"is_configured": true, 00:12:25.557 "data_offset": 2048, 00:12:25.557 "data_size": 63488 00:12:25.557 } 00:12:25.557 ] 00:12:25.557 }' 00:12:25.557 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:25.817 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=386 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.817 "name": "raid_bdev1", 00:12:25.817 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:25.817 "strip_size_kb": 0, 00:12:25.817 "state": "online", 00:12:25.817 "raid_level": "raid1", 00:12:25.817 "superblock": true, 00:12:25.817 "num_base_bdevs": 2, 00:12:25.817 "num_base_bdevs_discovered": 2, 00:12:25.817 "num_base_bdevs_operational": 2, 00:12:25.817 "process": { 00:12:25.817 "type": "rebuild", 00:12:25.817 "target": "spare", 00:12:25.817 "progress": { 00:12:25.817 "blocks": 22528, 00:12:25.817 "percent": 35 00:12:25.817 } 00:12:25.817 }, 00:12:25.817 "base_bdevs_list": [ 00:12:25.817 { 00:12:25.817 "name": "spare", 00:12:25.817 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:25.817 "is_configured": true, 00:12:25.817 "data_offset": 2048, 00:12:25.817 "data_size": 63488 00:12:25.817 }, 00:12:25.817 { 00:12:25.817 "name": "BaseBdev2", 00:12:25.817 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:25.817 "is_configured": true, 00:12:25.817 "data_offset": 2048, 00:12:25.817 "data_size": 63488 00:12:25.817 } 00:12:25.817 ] 00:12:25.817 }' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.817 23:46:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.817 23:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.755 23:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.014 23:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.014 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.014 "name": "raid_bdev1", 00:12:27.014 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:27.014 "strip_size_kb": 0, 00:12:27.014 "state": "online", 00:12:27.014 "raid_level": "raid1", 00:12:27.014 "superblock": true, 00:12:27.014 "num_base_bdevs": 2, 00:12:27.014 "num_base_bdevs_discovered": 2, 00:12:27.014 "num_base_bdevs_operational": 2, 00:12:27.014 "process": { 
00:12:27.014 "type": "rebuild", 00:12:27.014 "target": "spare", 00:12:27.014 "progress": { 00:12:27.014 "blocks": 45056, 00:12:27.014 "percent": 70 00:12:27.014 } 00:12:27.014 }, 00:12:27.014 "base_bdevs_list": [ 00:12:27.014 { 00:12:27.014 "name": "spare", 00:12:27.014 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:27.014 "is_configured": true, 00:12:27.014 "data_offset": 2048, 00:12:27.014 "data_size": 63488 00:12:27.014 }, 00:12:27.014 { 00:12:27.014 "name": "BaseBdev2", 00:12:27.014 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:27.014 "is_configured": true, 00:12:27.014 "data_offset": 2048, 00:12:27.014 "data_size": 63488 00:12:27.014 } 00:12:27.014 ] 00:12:27.014 }' 00:12:27.014 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.014 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.014 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.014 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.014 23:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:27.952 [2024-12-06 23:46:39.145184] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:27.952 [2024-12-06 23:46:39.145286] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:27.952 [2024-12-06 23:46:39.145434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.952 
23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.952 "name": "raid_bdev1", 00:12:27.952 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:27.952 "strip_size_kb": 0, 00:12:27.952 "state": "online", 00:12:27.952 "raid_level": "raid1", 00:12:27.952 "superblock": true, 00:12:27.952 "num_base_bdevs": 2, 00:12:27.952 "num_base_bdevs_discovered": 2, 00:12:27.952 "num_base_bdevs_operational": 2, 00:12:27.952 "base_bdevs_list": [ 00:12:27.952 { 00:12:27.952 "name": "spare", 00:12:27.952 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:27.952 "is_configured": true, 00:12:27.952 "data_offset": 2048, 00:12:27.952 "data_size": 63488 00:12:27.952 }, 00:12:27.952 { 00:12:27.952 "name": "BaseBdev2", 00:12:27.952 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:27.952 "is_configured": true, 00:12:27.952 "data_offset": 2048, 00:12:27.952 "data_size": 63488 00:12:27.952 } 00:12:27.952 ] 00:12:27.952 }' 00:12:27.952 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.211 "name": "raid_bdev1", 00:12:28.211 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:28.211 "strip_size_kb": 0, 00:12:28.211 "state": "online", 00:12:28.211 "raid_level": "raid1", 00:12:28.211 "superblock": true, 00:12:28.211 "num_base_bdevs": 2, 00:12:28.211 "num_base_bdevs_discovered": 2, 00:12:28.211 "num_base_bdevs_operational": 2, 00:12:28.211 "base_bdevs_list": [ 00:12:28.211 { 00:12:28.211 
"name": "spare", 00:12:28.211 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:28.211 "is_configured": true, 00:12:28.211 "data_offset": 2048, 00:12:28.211 "data_size": 63488 00:12:28.211 }, 00:12:28.211 { 00:12:28.211 "name": "BaseBdev2", 00:12:28.211 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:28.211 "is_configured": true, 00:12:28.211 "data_offset": 2048, 00:12:28.211 "data_size": 63488 00:12:28.211 } 00:12:28.211 ] 00:12:28.211 }' 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.211 "name": "raid_bdev1", 00:12:28.211 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:28.211 "strip_size_kb": 0, 00:12:28.211 "state": "online", 00:12:28.211 "raid_level": "raid1", 00:12:28.211 "superblock": true, 00:12:28.211 "num_base_bdevs": 2, 00:12:28.211 "num_base_bdevs_discovered": 2, 00:12:28.211 "num_base_bdevs_operational": 2, 00:12:28.211 "base_bdevs_list": [ 00:12:28.211 { 00:12:28.211 "name": "spare", 00:12:28.211 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:28.211 "is_configured": true, 00:12:28.211 "data_offset": 2048, 00:12:28.211 "data_size": 63488 00:12:28.211 }, 00:12:28.211 { 00:12:28.211 "name": "BaseBdev2", 00:12:28.211 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:28.211 "is_configured": true, 00:12:28.211 "data_offset": 2048, 00:12:28.211 "data_size": 63488 00:12:28.211 } 00:12:28.211 ] 00:12:28.211 }' 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.211 23:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.779 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.779 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.779 23:46:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.779 [2024-12-06 23:46:40.135351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.779 [2024-12-06 23:46:40.135394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.779 [2024-12-06 23:46:40.135489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.779 [2024-12-06 23:46:40.135570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.779 [2024-12-06 23:46:40.135582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:28.779 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.779 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:28.779 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.780 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:29.039 /dev/nbd0 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.039 1+0 records in 00:12:29.039 1+0 records out 00:12:29.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401436 s, 10.2 MB/s 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.039 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:29.299 /dev/nbd1 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:29.299 23:46:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.299 1+0 records in 00:12:29.299 1+0 records out 00:12:29.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272896 s, 15.0 MB/s 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.299 
23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.299 23:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.558 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.817 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.817 [2024-12-06 23:46:41.275497] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:29.817 [2024-12-06 23:46:41.275557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.817 [2024-12-06 23:46:41.275584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:29.817 [2024-12-06 23:46:41.275594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.817 [2024-12-06 23:46:41.277909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.817 [2024-12-06 23:46:41.277940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:29.817 [2024-12-06 23:46:41.278022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:29.817 [2024-12-06 
23:46:41.278077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.818 [2024-12-06 23:46:41.278216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.818 spare 00:12:29.818 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.818 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:29.818 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.818 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.818 [2024-12-06 23:46:41.378138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:29.818 [2024-12-06 23:46:41.378179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.818 [2024-12-06 23:46:41.378443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:29.818 [2024-12-06 23:46:41.378623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:29.818 [2024-12-06 23:46:41.378644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:29.818 [2024-12-06 23:46:41.378800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.077 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.077 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.077 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.077 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.077 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:30.077 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.078 "name": "raid_bdev1", 00:12:30.078 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:30.078 "strip_size_kb": 0, 00:12:30.078 "state": "online", 00:12:30.078 "raid_level": "raid1", 00:12:30.078 "superblock": true, 00:12:30.078 "num_base_bdevs": 2, 00:12:30.078 "num_base_bdevs_discovered": 2, 00:12:30.078 "num_base_bdevs_operational": 2, 00:12:30.078 "base_bdevs_list": [ 00:12:30.078 { 00:12:30.078 "name": "spare", 00:12:30.078 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:30.078 "is_configured": true, 00:12:30.078 "data_offset": 2048, 00:12:30.078 "data_size": 63488 00:12:30.078 }, 00:12:30.078 { 00:12:30.078 "name": "BaseBdev2", 00:12:30.078 "uuid": 
"f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:30.078 "is_configured": true, 00:12:30.078 "data_offset": 2048, 00:12:30.078 "data_size": 63488 00:12:30.078 } 00:12:30.078 ] 00:12:30.078 }' 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.078 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.338 "name": "raid_bdev1", 00:12:30.338 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:30.338 "strip_size_kb": 0, 00:12:30.338 "state": "online", 00:12:30.338 "raid_level": "raid1", 00:12:30.338 "superblock": true, 00:12:30.338 "num_base_bdevs": 2, 00:12:30.338 "num_base_bdevs_discovered": 2, 00:12:30.338 "num_base_bdevs_operational": 2, 00:12:30.338 "base_bdevs_list": [ 00:12:30.338 { 
00:12:30.338 "name": "spare", 00:12:30.338 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:30.338 "is_configured": true, 00:12:30.338 "data_offset": 2048, 00:12:30.338 "data_size": 63488 00:12:30.338 }, 00:12:30.338 { 00:12:30.338 "name": "BaseBdev2", 00:12:30.338 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:30.338 "is_configured": true, 00:12:30.338 "data_offset": 2048, 00:12:30.338 "data_size": 63488 00:12:30.338 } 00:12:30.338 ] 00:12:30.338 }' 00:12:30.338 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.599 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.599 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.599 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.599 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.599 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.599 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.599 23:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:30.599 23:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.599 [2024-12-06 23:46:42.022332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.599 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.599 "name": "raid_bdev1", 00:12:30.599 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:30.600 "strip_size_kb": 0, 00:12:30.600 
"state": "online", 00:12:30.600 "raid_level": "raid1", 00:12:30.600 "superblock": true, 00:12:30.600 "num_base_bdevs": 2, 00:12:30.600 "num_base_bdevs_discovered": 1, 00:12:30.600 "num_base_bdevs_operational": 1, 00:12:30.600 "base_bdevs_list": [ 00:12:30.600 { 00:12:30.600 "name": null, 00:12:30.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.600 "is_configured": false, 00:12:30.600 "data_offset": 0, 00:12:30.600 "data_size": 63488 00:12:30.600 }, 00:12:30.600 { 00:12:30.600 "name": "BaseBdev2", 00:12:30.600 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:30.600 "is_configured": true, 00:12:30.600 "data_offset": 2048, 00:12:30.600 "data_size": 63488 00:12:30.600 } 00:12:30.600 ] 00:12:30.600 }' 00:12:30.600 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.600 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.170 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:31.170 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.170 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.170 [2024-12-06 23:46:42.513571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.170 [2024-12-06 23:46:42.513832] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:31.170 [2024-12-06 23:46:42.513849] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:31.170 [2024-12-06 23:46:42.513885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.170 [2024-12-06 23:46:42.529988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:31.170 23:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.170 23:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:31.170 [2024-12-06 23:46:42.531811] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.109 "name": "raid_bdev1", 00:12:32.109 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:32.109 "strip_size_kb": 0, 00:12:32.109 "state": "online", 00:12:32.109 "raid_level": "raid1", 
00:12:32.109 "superblock": true, 00:12:32.109 "num_base_bdevs": 2, 00:12:32.109 "num_base_bdevs_discovered": 2, 00:12:32.109 "num_base_bdevs_operational": 2, 00:12:32.109 "process": { 00:12:32.109 "type": "rebuild", 00:12:32.109 "target": "spare", 00:12:32.109 "progress": { 00:12:32.109 "blocks": 20480, 00:12:32.109 "percent": 32 00:12:32.109 } 00:12:32.109 }, 00:12:32.109 "base_bdevs_list": [ 00:12:32.109 { 00:12:32.109 "name": "spare", 00:12:32.109 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:32.109 "is_configured": true, 00:12:32.109 "data_offset": 2048, 00:12:32.109 "data_size": 63488 00:12:32.109 }, 00:12:32.109 { 00:12:32.109 "name": "BaseBdev2", 00:12:32.109 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:32.109 "is_configured": true, 00:12:32.109 "data_offset": 2048, 00:12:32.109 "data_size": 63488 00:12:32.109 } 00:12:32.109 ] 00:12:32.109 }' 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.109 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.369 [2024-12-06 23:46:43.695360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.369 [2024-12-06 23:46:43.737564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:32.369 [2024-12-06 23:46:43.737641] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:32.369 [2024-12-06 23:46:43.737656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.369 [2024-12-06 23:46:43.737665] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.369 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.369 "name": "raid_bdev1", 00:12:32.369 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:32.369 "strip_size_kb": 0, 00:12:32.369 "state": "online", 00:12:32.369 "raid_level": "raid1", 00:12:32.369 "superblock": true, 00:12:32.369 "num_base_bdevs": 2, 00:12:32.369 "num_base_bdevs_discovered": 1, 00:12:32.369 "num_base_bdevs_operational": 1, 00:12:32.369 "base_bdevs_list": [ 00:12:32.369 { 00:12:32.369 "name": null, 00:12:32.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.369 "is_configured": false, 00:12:32.369 "data_offset": 0, 00:12:32.369 "data_size": 63488 00:12:32.369 }, 00:12:32.369 { 00:12:32.369 "name": "BaseBdev2", 00:12:32.369 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:32.369 "is_configured": true, 00:12:32.370 "data_offset": 2048, 00:12:32.370 "data_size": 63488 00:12:32.370 } 00:12:32.370 ] 00:12:32.370 }' 00:12:32.370 23:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.370 23:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.630 23:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.630 23:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.630 23:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.889 [2024-12-06 23:46:44.195009] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.889 [2024-12-06 23:46:44.195127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.889 [2024-12-06 23:46:44.195197] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:32.889 [2024-12-06 23:46:44.195231] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.889 [2024-12-06 23:46:44.195747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.889 [2024-12-06 23:46:44.195809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.889 [2024-12-06 23:46:44.195931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:32.889 [2024-12-06 23:46:44.195973] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:32.889 [2024-12-06 23:46:44.196012] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:32.889 [2024-12-06 23:46:44.196066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.889 [2024-12-06 23:46:44.211835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:32.890 spare 00:12:32.890 23:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.890 [2024-12-06 23:46:44.213627] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.890 23:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.828 "name": "raid_bdev1", 00:12:33.828 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:33.828 "strip_size_kb": 0, 00:12:33.828 "state": "online", 00:12:33.828 "raid_level": "raid1", 00:12:33.828 "superblock": true, 00:12:33.828 "num_base_bdevs": 2, 00:12:33.828 "num_base_bdevs_discovered": 2, 00:12:33.828 "num_base_bdevs_operational": 2, 00:12:33.828 "process": { 00:12:33.828 "type": "rebuild", 00:12:33.828 "target": "spare", 00:12:33.828 "progress": { 00:12:33.828 "blocks": 20480, 00:12:33.828 "percent": 32 00:12:33.828 } 00:12:33.828 }, 00:12:33.828 "base_bdevs_list": [ 00:12:33.828 { 00:12:33.828 "name": "spare", 00:12:33.828 "uuid": "87a6fe2c-607d-5529-86a6-a96cc6ed3ac6", 00:12:33.828 "is_configured": true, 00:12:33.828 "data_offset": 2048, 00:12:33.828 "data_size": 63488 00:12:33.828 }, 00:12:33.828 { 00:12:33.828 "name": "BaseBdev2", 00:12:33.828 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:33.828 "is_configured": true, 00:12:33.828 "data_offset": 2048, 00:12:33.828 "data_size": 63488 00:12:33.828 } 00:12:33.828 ] 00:12:33.828 }' 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.828 
23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.828 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.828 [2024-12-06 23:46:45.357468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.087 [2024-12-06 23:46:45.418305] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.087 [2024-12-06 23:46:45.418359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.087 [2024-12-06 23:46:45.418391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.087 [2024-12-06 23:46:45.418399] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.087 "name": "raid_bdev1", 00:12:34.087 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:34.087 "strip_size_kb": 0, 00:12:34.087 "state": "online", 00:12:34.087 "raid_level": "raid1", 00:12:34.087 "superblock": true, 00:12:34.087 "num_base_bdevs": 2, 00:12:34.087 "num_base_bdevs_discovered": 1, 00:12:34.087 "num_base_bdevs_operational": 1, 00:12:34.087 "base_bdevs_list": [ 00:12:34.087 { 00:12:34.087 "name": null, 00:12:34.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.087 "is_configured": false, 00:12:34.087 "data_offset": 0, 00:12:34.087 "data_size": 63488 00:12:34.087 }, 00:12:34.087 { 00:12:34.087 "name": "BaseBdev2", 00:12:34.087 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:34.087 "is_configured": true, 00:12:34.087 "data_offset": 2048, 00:12:34.087 "data_size": 63488 00:12:34.087 } 00:12:34.087 ] 00:12:34.087 }' 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.087 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.345 23:46:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.345 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.345 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.345 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.345 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.345 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.345 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.604 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.604 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.604 23:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.604 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.604 "name": "raid_bdev1", 00:12:34.604 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:34.604 "strip_size_kb": 0, 00:12:34.604 "state": "online", 00:12:34.604 "raid_level": "raid1", 00:12:34.604 "superblock": true, 00:12:34.604 "num_base_bdevs": 2, 00:12:34.604 "num_base_bdevs_discovered": 1, 00:12:34.604 "num_base_bdevs_operational": 1, 00:12:34.604 "base_bdevs_list": [ 00:12:34.604 { 00:12:34.604 "name": null, 00:12:34.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.604 "is_configured": false, 00:12:34.604 "data_offset": 0, 00:12:34.604 "data_size": 63488 00:12:34.604 }, 00:12:34.604 { 00:12:34.604 "name": "BaseBdev2", 00:12:34.604 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:34.604 "is_configured": true, 00:12:34.604 "data_offset": 2048, 00:12:34.604 "data_size": 
63488 00:12:34.604 } 00:12:34.604 ] 00:12:34.604 }' 00:12:34.604 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.604 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.604 23:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.604 [2024-12-06 23:46:46.019299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:34.604 [2024-12-06 23:46:46.019362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.604 [2024-12-06 23:46:46.019394] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:34.604 [2024-12-06 23:46:46.019419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.604 [2024-12-06 23:46:46.019882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.604 [2024-12-06 23:46:46.019950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:34.604 [2024-12-06 23:46:46.020040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:34.604 [2024-12-06 23:46:46.020054] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:34.604 [2024-12-06 23:46:46.020066] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:34.604 [2024-12-06 23:46:46.020077] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:34.604 BaseBdev1 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.604 23:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.538 "name": "raid_bdev1", 00:12:35.538 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:35.538 "strip_size_kb": 0, 00:12:35.538 "state": "online", 00:12:35.538 "raid_level": "raid1", 00:12:35.538 "superblock": true, 00:12:35.538 "num_base_bdevs": 2, 00:12:35.538 "num_base_bdevs_discovered": 1, 00:12:35.538 "num_base_bdevs_operational": 1, 00:12:35.538 "base_bdevs_list": [ 00:12:35.538 { 00:12:35.538 "name": null, 00:12:35.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.538 "is_configured": false, 00:12:35.538 "data_offset": 0, 00:12:35.538 "data_size": 63488 00:12:35.538 }, 00:12:35.538 { 00:12:35.538 "name": "BaseBdev2", 00:12:35.538 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:35.538 "is_configured": true, 00:12:35.538 "data_offset": 2048, 00:12:35.538 "data_size": 63488 00:12:35.538 } 00:12:35.538 ] 00:12:35.538 }' 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.538 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.108 "name": "raid_bdev1", 00:12:36.108 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:36.108 "strip_size_kb": 0, 00:12:36.108 "state": "online", 00:12:36.108 "raid_level": "raid1", 00:12:36.108 "superblock": true, 00:12:36.108 "num_base_bdevs": 2, 00:12:36.108 "num_base_bdevs_discovered": 1, 00:12:36.108 "num_base_bdevs_operational": 1, 00:12:36.108 "base_bdevs_list": [ 00:12:36.108 { 00:12:36.108 "name": null, 00:12:36.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.108 "is_configured": false, 00:12:36.108 "data_offset": 0, 00:12:36.108 "data_size": 63488 00:12:36.108 }, 00:12:36.108 { 00:12:36.108 "name": "BaseBdev2", 00:12:36.108 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:36.108 "is_configured": true, 00:12:36.108 "data_offset": 2048, 00:12:36.108 "data_size": 63488 00:12:36.108 } 00:12:36.108 ] 00:12:36.108 }' 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.108 23:46:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.108 [2024-12-06 23:46:47.652780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.108 [2024-12-06 23:46:47.652958] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:36.108 [2024-12-06 23:46:47.652976] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:36.108 request: 00:12:36.108 { 00:12:36.108 "base_bdev": "BaseBdev1", 00:12:36.108 "raid_bdev": "raid_bdev1", 00:12:36.108 "method": 
"bdev_raid_add_base_bdev", 00:12:36.108 "req_id": 1 00:12:36.108 } 00:12:36.108 Got JSON-RPC error response 00:12:36.108 response: 00:12:36.108 { 00:12:36.108 "code": -22, 00:12:36.108 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:36.108 } 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:36.108 23:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.488 23:46:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.488 "name": "raid_bdev1", 00:12:37.488 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:37.488 "strip_size_kb": 0, 00:12:37.488 "state": "online", 00:12:37.488 "raid_level": "raid1", 00:12:37.488 "superblock": true, 00:12:37.488 "num_base_bdevs": 2, 00:12:37.488 "num_base_bdevs_discovered": 1, 00:12:37.488 "num_base_bdevs_operational": 1, 00:12:37.488 "base_bdevs_list": [ 00:12:37.488 { 00:12:37.488 "name": null, 00:12:37.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.488 "is_configured": false, 00:12:37.488 "data_offset": 0, 00:12:37.488 "data_size": 63488 00:12:37.488 }, 00:12:37.488 { 00:12:37.488 "name": "BaseBdev2", 00:12:37.488 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:37.488 "is_configured": true, 00:12:37.488 "data_offset": 2048, 00:12:37.488 "data_size": 63488 00:12:37.488 } 00:12:37.488 ] 00:12:37.488 }' 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.488 23:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.748 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.748 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.749 "name": "raid_bdev1", 00:12:37.749 "uuid": "63770bd2-bacc-4703-a5df-d8072cf451cb", 00:12:37.749 "strip_size_kb": 0, 00:12:37.749 "state": "online", 00:12:37.749 "raid_level": "raid1", 00:12:37.749 "superblock": true, 00:12:37.749 "num_base_bdevs": 2, 00:12:37.749 "num_base_bdevs_discovered": 1, 00:12:37.749 "num_base_bdevs_operational": 1, 00:12:37.749 "base_bdevs_list": [ 00:12:37.749 { 00:12:37.749 "name": null, 00:12:37.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.749 "is_configured": false, 00:12:37.749 "data_offset": 0, 00:12:37.749 "data_size": 63488 00:12:37.749 }, 00:12:37.749 { 00:12:37.749 "name": "BaseBdev2", 00:12:37.749 "uuid": "f92edb48-ac3c-539a-939b-61034b6078b9", 00:12:37.749 "is_configured": true, 00:12:37.749 "data_offset": 2048, 00:12:37.749 "data_size": 63488 00:12:37.749 } 00:12:37.749 ] 00:12:37.749 }' 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75651 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75651 ']' 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75651 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75651 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.749 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.009 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75651' 00:12:38.009 killing process with pid 75651 00:12:38.009 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75651 00:12:38.009 Received shutdown signal, test time was about 60.000000 seconds 00:12:38.009 00:12:38.009 Latency(us) 00:12:38.009 [2024-12-06T23:46:49.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.009 [2024-12-06T23:46:49.572Z] =================================================================================================================== 00:12:38.009 [2024-12-06T23:46:49.572Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:38.009 [2024-12-06 23:46:49.311366] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.009 [2024-12-06 
23:46:49.311492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.009 [2024-12-06 23:46:49.311545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.009 [2024-12-06 23:46:49.311557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:38.009 23:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75651 00:12:38.268 [2024-12-06 23:46:49.600608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:39.208 00:12:39.208 real 0m23.156s 00:12:39.208 user 0m28.034s 00:12:39.208 sys 0m3.595s 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.208 ************************************ 00:12:39.208 END TEST raid_rebuild_test_sb 00:12:39.208 ************************************ 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.208 23:46:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:39.208 23:46:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:39.208 23:46:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.208 23:46:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.208 ************************************ 00:12:39.208 START TEST raid_rebuild_test_io 00:12:39.208 ************************************ 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:39.208 
23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76381 00:12:39.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76381 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76381 ']' 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.208 23:46:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.468 [2024-12-06 23:46:50.834785] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:12:39.468 [2024-12-06 23:46:50.834989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:39.469 Zero copy mechanism will not be used. 
00:12:39.469 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76381 ] 00:12:39.469 [2024-12-06 23:46:51.006219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.728 [2024-12-06 23:46:51.117728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.988 [2024-12-06 23:46:51.322058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.988 [2024-12-06 23:46:51.322159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.248 BaseBdev1_malloc 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.248 [2024-12-06 23:46:51.701159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:40.248 [2024-12-06 23:46:51.701221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:40.248 [2024-12-06 23:46:51.701260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:40.248 [2024-12-06 23:46:51.701273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.248 [2024-12-06 23:46:51.703292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.248 [2024-12-06 23:46:51.703386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.248 BaseBdev1 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.248 BaseBdev2_malloc 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.248 [2024-12-06 23:46:51.753669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:40.248 [2024-12-06 23:46:51.753738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.248 [2024-12-06 23:46:51.753763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:40.248 [2024-12-06 23:46:51.753773] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.248 [2024-12-06 23:46:51.755777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.248 [2024-12-06 23:46:51.755890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.248 BaseBdev2 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:40.248 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.249 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.509 spare_malloc 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.509 spare_delay 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.509 [2024-12-06 23:46:51.853809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.509 [2024-12-06 23:46:51.853862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:40.509 [2024-12-06 23:46:51.853882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:40.509 [2024-12-06 23:46:51.853892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.509 [2024-12-06 23:46:51.855992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.509 [2024-12-06 23:46:51.856068] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.509 spare 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.509 [2024-12-06 23:46:51.865839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.509 [2024-12-06 23:46:51.867535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.509 [2024-12-06 23:46:51.867622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:40.509 [2024-12-06 23:46:51.867635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:40.509 [2024-12-06 23:46:51.867873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:40.509 [2024-12-06 23:46:51.868017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:40.509 [2024-12-06 23:46:51.868028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:40.509 [2024-12-06 23:46:51.868175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.509 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.509 "name": "raid_bdev1", 00:12:40.509 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:40.509 
"strip_size_kb": 0, 00:12:40.509 "state": "online", 00:12:40.509 "raid_level": "raid1", 00:12:40.509 "superblock": false, 00:12:40.509 "num_base_bdevs": 2, 00:12:40.509 "num_base_bdevs_discovered": 2, 00:12:40.509 "num_base_bdevs_operational": 2, 00:12:40.509 "base_bdevs_list": [ 00:12:40.509 { 00:12:40.509 "name": "BaseBdev1", 00:12:40.509 "uuid": "e66cdeb0-9854-5fad-8a65-7165b13b8fcb", 00:12:40.509 "is_configured": true, 00:12:40.509 "data_offset": 0, 00:12:40.509 "data_size": 65536 00:12:40.509 }, 00:12:40.509 { 00:12:40.509 "name": "BaseBdev2", 00:12:40.509 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:40.509 "is_configured": true, 00:12:40.509 "data_offset": 0, 00:12:40.509 "data_size": 65536 00:12:40.509 } 00:12:40.509 ] 00:12:40.510 }' 00:12:40.510 23:46:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.510 23:46:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.769 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.769 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:40.769 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.769 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.030 [2024-12-06 23:46:52.333339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.030 [2024-12-06 23:46:52.420913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.030 23:46:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.030 "name": "raid_bdev1", 00:12:41.030 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:41.030 "strip_size_kb": 0, 00:12:41.030 "state": "online", 00:12:41.030 "raid_level": "raid1", 00:12:41.030 "superblock": false, 00:12:41.030 "num_base_bdevs": 2, 00:12:41.030 "num_base_bdevs_discovered": 1, 00:12:41.030 "num_base_bdevs_operational": 1, 00:12:41.030 "base_bdevs_list": [ 00:12:41.030 { 00:12:41.030 "name": null, 00:12:41.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.030 "is_configured": false, 00:12:41.030 "data_offset": 0, 00:12:41.030 "data_size": 65536 00:12:41.030 }, 00:12:41.030 { 00:12:41.030 "name": "BaseBdev2", 00:12:41.030 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:41.030 "is_configured": true, 00:12:41.030 "data_offset": 0, 00:12:41.030 "data_size": 65536 00:12:41.030 } 00:12:41.030 ] 00:12:41.030 }' 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.030 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:41.030 [2024-12-06 23:46:52.512610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:41.030 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:41.030 Zero copy mechanism will not be used. 00:12:41.030 Running I/O for 60 seconds... 00:12:41.602 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:41.602 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.602 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.602 [2024-12-06 23:46:52.865074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.602 23:46:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.602 23:46:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:41.602 [2024-12-06 23:46:52.898919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:41.602 [2024-12-06 23:46:52.900867] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:41.602 [2024-12-06 23:46:53.013462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:41.602 [2024-12-06 23:46:53.014064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:41.862 [2024-12-06 23:46:53.227183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:41.862 [2024-12-06 23:46:53.227582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:42.122 192.00 IOPS, 576.00 MiB/s [2024-12-06T23:46:53.685Z] [2024-12-06 23:46:53.544859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:42.122 [2024-12-06 23:46:53.545439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:42.382 [2024-12-06 23:46:53.751577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:42.382 [2024-12-06 23:46:53.751939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.382 23:46:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.643 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.643 "name": "raid_bdev1", 00:12:42.643 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:42.643 "strip_size_kb": 0, 00:12:42.643 "state": "online", 00:12:42.643 "raid_level": "raid1", 00:12:42.643 "superblock": false, 
00:12:42.643 "num_base_bdevs": 2, 00:12:42.643 "num_base_bdevs_discovered": 2, 00:12:42.643 "num_base_bdevs_operational": 2, 00:12:42.643 "process": { 00:12:42.643 "type": "rebuild", 00:12:42.643 "target": "spare", 00:12:42.643 "progress": { 00:12:42.643 "blocks": 10240, 00:12:42.643 "percent": 15 00:12:42.643 } 00:12:42.643 }, 00:12:42.643 "base_bdevs_list": [ 00:12:42.643 { 00:12:42.643 "name": "spare", 00:12:42.643 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:42.643 "is_configured": true, 00:12:42.643 "data_offset": 0, 00:12:42.643 "data_size": 65536 00:12:42.643 }, 00:12:42.643 { 00:12:42.643 "name": "BaseBdev2", 00:12:42.643 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:42.643 "is_configured": true, 00:12:42.643 "data_offset": 0, 00:12:42.643 "data_size": 65536 00:12:42.643 } 00:12:42.643 ] 00:12:42.643 }' 00:12:42.643 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.643 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.643 23:46:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.643 [2024-12-06 23:46:54.049289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:42.643 [2024-12-06 23:46:54.068850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:42.643 [2024-12-06 23:46:54.069282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:42.643 [2024-12-06 23:46:54.075407] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:42.643 [2024-12-06 23:46:54.082770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.643 [2024-12-06 23:46:54.082838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:42.643 [2024-12-06 23:46:54.082867] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:42.643 [2024-12-06 23:46:54.128846] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.643 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.643 "name": "raid_bdev1", 00:12:42.643 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:42.643 "strip_size_kb": 0, 00:12:42.643 "state": "online", 00:12:42.643 "raid_level": "raid1", 00:12:42.643 "superblock": false, 00:12:42.643 "num_base_bdevs": 2, 00:12:42.643 "num_base_bdevs_discovered": 1, 00:12:42.643 "num_base_bdevs_operational": 1, 00:12:42.643 "base_bdevs_list": [ 00:12:42.643 { 00:12:42.643 "name": null, 00:12:42.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.643 "is_configured": false, 00:12:42.643 "data_offset": 0, 00:12:42.643 "data_size": 65536 00:12:42.644 }, 00:12:42.644 { 00:12:42.644 "name": "BaseBdev2", 00:12:42.644 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:42.644 "is_configured": true, 00:12:42.644 "data_offset": 0, 00:12:42.644 "data_size": 65536 00:12:42.644 } 00:12:42.644 ] 00:12:42.644 }' 00:12:42.644 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.644 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.214 174.50 IOPS, 523.50 MiB/s [2024-12-06T23:46:54.777Z] 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.214 23:46:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.214 "name": "raid_bdev1", 00:12:43.214 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:43.214 "strip_size_kb": 0, 00:12:43.214 "state": "online", 00:12:43.214 "raid_level": "raid1", 00:12:43.214 "superblock": false, 00:12:43.214 "num_base_bdevs": 2, 00:12:43.214 "num_base_bdevs_discovered": 1, 00:12:43.214 "num_base_bdevs_operational": 1, 00:12:43.214 "base_bdevs_list": [ 00:12:43.214 { 00:12:43.214 "name": null, 00:12:43.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.214 "is_configured": false, 00:12:43.214 "data_offset": 0, 00:12:43.214 "data_size": 65536 00:12:43.214 }, 00:12:43.214 { 00:12:43.214 "name": "BaseBdev2", 00:12:43.214 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:43.214 "is_configured": true, 00:12:43.214 "data_offset": 0, 00:12:43.214 "data_size": 65536 00:12:43.214 } 00:12:43.214 ] 00:12:43.214 }' 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ none == \n\o\n\e ]] 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.214 [2024-12-06 23:46:54.706147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.214 23:46:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:43.214 [2024-12-06 23:46:54.743428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:43.215 [2024-12-06 23:46:54.745214] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:43.475 [2024-12-06 23:46:54.859621] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:43.475 [2024-12-06 23:46:54.860157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:43.738 [2024-12-06 23:46:55.073788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:43.738 [2024-12-06 23:46:55.074199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.001 [2024-12-06 23:46:55.325149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:44.001 168.00 IOPS, 504.00 MiB/s 
[2024-12-06T23:46:55.564Z] [2024-12-06 23:46:55.539534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:44.001 [2024-12-06 23:46:55.539916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.260 "name": "raid_bdev1", 00:12:44.260 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:44.260 "strip_size_kb": 0, 00:12:44.260 "state": "online", 00:12:44.260 "raid_level": "raid1", 00:12:44.260 "superblock": false, 00:12:44.260 "num_base_bdevs": 2, 00:12:44.260 "num_base_bdevs_discovered": 2, 00:12:44.260 "num_base_bdevs_operational": 2, 00:12:44.260 "process": { 00:12:44.260 "type": "rebuild", 00:12:44.260 "target": "spare", 
00:12:44.260 "progress": { 00:12:44.260 "blocks": 10240, 00:12:44.260 "percent": 15 00:12:44.260 } 00:12:44.260 }, 00:12:44.260 "base_bdevs_list": [ 00:12:44.260 { 00:12:44.260 "name": "spare", 00:12:44.260 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:44.260 "is_configured": true, 00:12:44.260 "data_offset": 0, 00:12:44.260 "data_size": 65536 00:12:44.260 }, 00:12:44.260 { 00:12:44.260 "name": "BaseBdev2", 00:12:44.260 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:44.260 "is_configured": true, 00:12:44.260 "data_offset": 0, 00:12:44.260 "data_size": 65536 00:12:44.260 } 00:12:44.260 ] 00:12:44.260 }' 00:12:44.260 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.519 [2024-12-06 23:46:55.863709] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.519 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.519 "name": "raid_bdev1", 00:12:44.519 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:44.519 "strip_size_kb": 0, 00:12:44.519 "state": "online", 00:12:44.519 "raid_level": "raid1", 00:12:44.519 "superblock": false, 00:12:44.519 "num_base_bdevs": 2, 00:12:44.519 "num_base_bdevs_discovered": 2, 00:12:44.519 "num_base_bdevs_operational": 2, 00:12:44.519 "process": { 00:12:44.519 "type": "rebuild", 00:12:44.519 "target": "spare", 00:12:44.519 "progress": { 00:12:44.519 "blocks": 14336, 00:12:44.519 "percent": 21 00:12:44.519 } 00:12:44.519 }, 00:12:44.520 "base_bdevs_list": [ 00:12:44.520 { 00:12:44.520 "name": "spare", 00:12:44.520 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:44.520 "is_configured": true, 00:12:44.520 "data_offset": 0, 00:12:44.520 "data_size": 65536 00:12:44.520 }, 00:12:44.520 { 00:12:44.520 "name": "BaseBdev2", 00:12:44.520 "uuid": 
"fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:44.520 "is_configured": true, 00:12:44.520 "data_offset": 0, 00:12:44.520 "data_size": 65536 00:12:44.520 } 00:12:44.520 ] 00:12:44.520 }' 00:12:44.520 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.520 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.520 23:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.520 23:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.520 23:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.520 [2024-12-06 23:46:56.078152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:45.089 137.25 IOPS, 411.75 MiB/s [2024-12-06T23:46:56.652Z] [2024-12-06 23:46:56.531120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:45.089 [2024-12-06 23:46:56.536684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:45.348 [2024-12-06 23:46:56.882734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:45.348 [2024-12-06 23:46:56.883289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.607 23:46:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.607 "name": "raid_bdev1", 00:12:45.607 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:45.607 "strip_size_kb": 0, 00:12:45.607 "state": "online", 00:12:45.607 "raid_level": "raid1", 00:12:45.607 "superblock": false, 00:12:45.607 "num_base_bdevs": 2, 00:12:45.607 "num_base_bdevs_discovered": 2, 00:12:45.607 "num_base_bdevs_operational": 2, 00:12:45.607 "process": { 00:12:45.607 "type": "rebuild", 00:12:45.607 "target": "spare", 00:12:45.607 "progress": { 00:12:45.607 "blocks": 26624, 00:12:45.607 "percent": 40 00:12:45.607 } 00:12:45.607 }, 00:12:45.607 "base_bdevs_list": [ 00:12:45.607 { 00:12:45.607 "name": "spare", 00:12:45.607 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:45.607 "is_configured": true, 00:12:45.607 "data_offset": 0, 00:12:45.607 "data_size": 65536 00:12:45.607 }, 00:12:45.607 { 00:12:45.607 "name": "BaseBdev2", 00:12:45.607 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:45.607 "is_configured": true, 00:12:45.607 "data_offset": 0, 00:12:45.607 "data_size": 65536 00:12:45.607 } 00:12:45.607 ] 
00:12:45.607 }' 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.607 [2024-12-06 23:46:57.086553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.607 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.867 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.867 23:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:45.867 [2024-12-06 23:46:57.307758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:45.867 [2024-12-06 23:46:57.308349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:46.696 117.60 IOPS, 352.80 MiB/s [2024-12-06T23:46:58.259Z] [2024-12-06 23:46:58.167261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.696 "name": "raid_bdev1", 00:12:46.696 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:46.696 "strip_size_kb": 0, 00:12:46.696 "state": "online", 00:12:46.696 "raid_level": "raid1", 00:12:46.696 "superblock": false, 00:12:46.696 "num_base_bdevs": 2, 00:12:46.696 "num_base_bdevs_discovered": 2, 00:12:46.696 "num_base_bdevs_operational": 2, 00:12:46.696 "process": { 00:12:46.696 "type": "rebuild", 00:12:46.696 "target": "spare", 00:12:46.696 "progress": { 00:12:46.696 "blocks": 47104, 00:12:46.696 "percent": 71 00:12:46.696 } 00:12:46.696 }, 00:12:46.696 "base_bdevs_list": [ 00:12:46.696 { 00:12:46.696 "name": "spare", 00:12:46.696 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:46.696 "is_configured": true, 00:12:46.696 "data_offset": 0, 00:12:46.696 "data_size": 65536 00:12:46.696 }, 00:12:46.696 { 00:12:46.696 "name": "BaseBdev2", 00:12:46.696 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:46.696 "is_configured": true, 00:12:46.696 "data_offset": 0, 00:12:46.696 "data_size": 65536 00:12:46.696 } 00:12:46.696 ] 00:12:46.696 }' 00:12:46.696 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.956 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.956 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:46.956 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.956 23:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.956 [2024-12-06 23:46:58.494919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:47.475 103.50 IOPS, 310.50 MiB/s [2024-12-06T23:46:59.038Z] [2024-12-06 23:46:58.930742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:47.734 [2024-12-06 23:46:59.259986] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.994 23:46:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.994 "name": "raid_bdev1", 00:12:47.994 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:47.994 "strip_size_kb": 0, 00:12:47.994 "state": "online", 00:12:47.994 "raid_level": "raid1", 00:12:47.994 "superblock": false, 00:12:47.994 "num_base_bdevs": 2, 00:12:47.994 "num_base_bdevs_discovered": 2, 00:12:47.994 "num_base_bdevs_operational": 2, 00:12:47.994 "process": { 00:12:47.994 "type": "rebuild", 00:12:47.994 "target": "spare", 00:12:47.994 "progress": { 00:12:47.994 "blocks": 65536, 00:12:47.994 "percent": 100 00:12:47.994 } 00:12:47.994 }, 00:12:47.994 "base_bdevs_list": [ 00:12:47.994 { 00:12:47.994 "name": "spare", 00:12:47.994 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:47.994 "is_configured": true, 00:12:47.994 "data_offset": 0, 00:12:47.994 "data_size": 65536 00:12:47.994 }, 00:12:47.994 { 00:12:47.994 "name": "BaseBdev2", 00:12:47.994 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:47.994 "is_configured": true, 00:12:47.994 "data_offset": 0, 00:12:47.994 "data_size": 65536 00:12:47.994 } 00:12:47.994 ] 00:12:47.994 }' 00:12:47.994 [2024-12-06 23:46:59.366087] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.994 [2024-12-06 23:46:59.369075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.994 23:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.934 93.86 IOPS, 281.57 MiB/s 
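The trace above repeatedly runs `rpc_cmd bdev_raid_get_bdevs all`, selects the entry named `raid_bdev1` with `jq`, and checks `.process.type // "none"` and `.process.target // "none"` against the expected `rebuild`/`spare` pair. A minimal Python sketch of that same selection logic follows; the sample JSON is shaped like the RPC output in the log (field values copied from the trace, not queried live):

```python
import json

# Sample shaped like the `bdev_raid_get_bdevs all` output dumped in the
# trace above; values are copied from the log for illustration.
bdevs_json = '''
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "process": {"type": "rebuild", "target": "spare",
                "progress": {"blocks": 65536, "percent": 100}}
  }
]
'''

def process_fields(raw, name):
    """Mirror the jq filters used by verify_raid_bdev_process:
    .[] | select(.name == NAME), then .process.type // "none"
    and .process.target // "none"."""
    bdev = next((b for b in json.loads(raw) if b["name"] == name), None)
    if bdev is None:
        return ("none", "none")
    proc = bdev.get("process") or {}
    return (proc.get("type", "none"), proc.get("target", "none"))

print(process_fields(bdevs_json, "raid_bdev1"))  # ('rebuild', 'spare')
```

Once the rebuild finishes, the `process` object disappears from the RPC output, so both fields fall back to `"none"` and the polling loop in the script takes its `break` path.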
[2024-12-06T23:47:00.497Z] 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.934 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.193 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.194 88.25 IOPS, 264.75 MiB/s [2024-12-06T23:47:00.757Z] 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.194 "name": "raid_bdev1", 00:12:49.194 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:49.194 "strip_size_kb": 0, 00:12:49.194 "state": "online", 00:12:49.194 "raid_level": "raid1", 00:12:49.194 "superblock": false, 00:12:49.194 "num_base_bdevs": 2, 00:12:49.194 "num_base_bdevs_discovered": 2, 00:12:49.194 "num_base_bdevs_operational": 2, 00:12:49.194 "base_bdevs_list": [ 00:12:49.194 { 00:12:49.194 "name": "spare", 00:12:49.194 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:49.194 "is_configured": true, 00:12:49.194 "data_offset": 0, 00:12:49.194 "data_size": 65536 00:12:49.194 }, 
00:12:49.194 { 00:12:49.194 "name": "BaseBdev2", 00:12:49.194 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:49.194 "is_configured": true, 00:12:49.194 "data_offset": 0, 00:12:49.194 "data_size": 65536 00:12:49.194 } 00:12:49.194 ] 00:12:49.194 }' 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:49.194 "name": "raid_bdev1", 00:12:49.194 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:49.194 "strip_size_kb": 0, 00:12:49.194 "state": "online", 00:12:49.194 "raid_level": "raid1", 00:12:49.194 "superblock": false, 00:12:49.194 "num_base_bdevs": 2, 00:12:49.194 "num_base_bdevs_discovered": 2, 00:12:49.194 "num_base_bdevs_operational": 2, 00:12:49.194 "base_bdevs_list": [ 00:12:49.194 { 00:12:49.194 "name": "spare", 00:12:49.194 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:49.194 "is_configured": true, 00:12:49.194 "data_offset": 0, 00:12:49.194 "data_size": 65536 00:12:49.194 }, 00:12:49.194 { 00:12:49.194 "name": "BaseBdev2", 00:12:49.194 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:49.194 "is_configured": true, 00:12:49.194 "data_offset": 0, 00:12:49.194 "data_size": 65536 00:12:49.194 } 00:12:49.194 ] 00:12:49.194 }' 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.194 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.455 "name": "raid_bdev1", 00:12:49.455 "uuid": "0dd048e0-0c43-4863-9347-2de61033c399", 00:12:49.455 "strip_size_kb": 0, 00:12:49.455 "state": "online", 00:12:49.455 "raid_level": "raid1", 00:12:49.455 "superblock": false, 00:12:49.455 "num_base_bdevs": 2, 00:12:49.455 "num_base_bdevs_discovered": 2, 00:12:49.455 "num_base_bdevs_operational": 2, 00:12:49.455 "base_bdevs_list": [ 00:12:49.455 { 00:12:49.455 "name": "spare", 00:12:49.455 "uuid": "b57d9028-9482-58fa-86fe-a0986ee276b3", 00:12:49.455 "is_configured": true, 00:12:49.455 "data_offset": 0, 00:12:49.455 "data_size": 65536 00:12:49.455 }, 00:12:49.455 { 00:12:49.455 "name": "BaseBdev2", 00:12:49.455 "uuid": "fec723e3-7bed-5f6e-9cde-7bf62ce57cae", 00:12:49.455 "is_configured": true, 00:12:49.455 "data_offset": 0, 00:12:49.455 "data_size": 65536 00:12:49.455 } 00:12:49.455 ] 00:12:49.455 }' 00:12:49.455 23:47:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.455 23:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.715 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:49.715 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.715 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.715 [2024-12-06 23:47:01.222242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.715 [2024-12-06 23:47:01.222338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.974 00:12:49.974 Latency(us) 00:12:49.974 [2024-12-06T23:47:01.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.974 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:49.974 raid_bdev1 : 8.80 82.95 248.85 0.00 0.00 16792.91 296.92 109436.53 00:12:49.974 [2024-12-06T23:47:01.537Z] =================================================================================================================== 00:12:49.974 [2024-12-06T23:47:01.537Z] Total : 82.95 248.85 0.00 0.00 16792.91 296.92 109436.53 00:12:49.974 [2024-12-06 23:47:01.320289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.974 [2024-12-06 23:47:01.320393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.974 [2024-12-06 23:47:01.320485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.974 [2024-12-06 23:47:01.320538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:49.974 { 00:12:49.974 "results": [ 00:12:49.974 { 00:12:49.974 "job": "raid_bdev1", 00:12:49.974 "core_mask": "0x1", 
00:12:49.974 "workload": "randrw", 00:12:49.974 "percentage": 50, 00:12:49.974 "status": "finished", 00:12:49.974 "queue_depth": 2, 00:12:49.974 "io_size": 3145728, 00:12:49.974 "runtime": 8.80034, 00:12:49.974 "iops": 82.95134051638914, 00:12:49.974 "mibps": 248.8540215491674, 00:12:49.974 "io_failed": 0, 00:12:49.974 "io_timeout": 0, 00:12:49.974 "avg_latency_us": 16792.905368188072, 00:12:49.974 "min_latency_us": 296.91528384279474, 00:12:49.974 "max_latency_us": 109436.5344978166 00:12:49.974 } 00:12:49.974 ], 00:12:49.974 "core_count": 1 00:12:49.974 } 00:12:49.974 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.974 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:49.975 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:50.234 /dev/nbd0 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.234 1+0 records in 00:12:50.234 1+0 records out 00:12:50.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527712 s, 7.8 MB/s 
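The `waitfornbd` helper traced above loops on `grep -q -w nbd0 /proc/partitions` before touching the device with a single 4 KiB `dd`. A small sketch of the whole-word match it relies on, using an illustrative partitions listing rather than a live system:

```python
import re

# Text shaped like /proc/partitions; contents are illustrative only.
partitions = """major minor  #blocks  name
  43        0      65536 nbd0
  43       16      65536 nbd1
"""

def nbd_present(partitions_text, nbd_name):
    """Mirror `grep -q -w nbdX /proc/partitions`: whole-word match,
    so "nbd0" matches but the bare prefix "nbd" does not."""
    return re.search(rf"\b{re.escape(nbd_name)}\b", partitions_text) is not None

print(nbd_present(partitions, "nbd0"), nbd_present(partitions, "nbd2"))
# True False
```

The `-w` (whole word) flag matters here: without it, waiting for `nbd1` could spuriously succeed as soon as a device like `nbd10` appears.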
00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:50.234 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.235 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:50.235 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.235 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:50.235 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.235 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.235 23:47:01 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:50.523 /dev/nbd1 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.523 1+0 records in 00:12:50.523 1+0 records out 00:12:50.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415465 s, 9.9 MB/s 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.523 23:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:50.523 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:50.523 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.523 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:50.523 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.523 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:50.523 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.523 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:50.783 
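The bdevperf results JSON earlier in the trace reports `iops: 82.95134051638914` and `mibps: 248.8540215491674` for 3 MiB I/Os over an 8.8 s runtime. Those figures are self-consistent: throughput in MiB/s is just IOPS times the I/O size. A quick check, with the numbers copied from the log:

```python
# Figures copied from the bdevperf results JSON in the trace above.
iops = 82.95134051638914
io_size = 3145728          # 3 MiB per I/O (the -o 3M bdevperf argument)
runtime_s = 8.80034

mibps = iops * io_size / 2**20   # bytes/s -> MiB/s
total_ios = iops * runtime_s     # approximate I/O count over the run

print(f"{mibps:.10f} MiB/s over ~{total_ios:.0f} I/Os")
```

With a 3 MiB I/O size the MiB/s column is exactly three times the IOPS column, which matches the per-second samples (e.g. 82.95 IOPS, 248.85 MiB/s) interleaved through the log.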
23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.783 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76381 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' 
-z 76381 ']' 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76381 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76381 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76381' 00:12:51.042 killing process with pid 76381 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76381 00:12:51.042 Received shutdown signal, test time was about 10.032071 seconds 00:12:51.042 00:12:51.042 Latency(us) 00:12:51.042 [2024-12-06T23:47:02.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.042 [2024-12-06T23:47:02.605Z] =================================================================================================================== 00:12:51.042 [2024-12-06T23:47:02.605Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:51.042 [2024-12-06 23:47:02.527462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.042 23:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76381 00:12:51.302 [2024-12-06 23:47:02.752912] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.680 23:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:52.680 00:12:52.680 real 0m13.151s 00:12:52.680 user 0m16.409s 00:12:52.680 sys 0m1.479s 00:12:52.680 23:47:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.680 ************************************ 00:12:52.680 END TEST raid_rebuild_test_io 00:12:52.680 ************************************ 00:12:52.680 23:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.680 23:47:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:52.680 23:47:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:52.680 23:47:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.680 23:47:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.680 ************************************ 00:12:52.681 START TEST raid_rebuild_test_sb_io 00:12:52.681 ************************************ 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:52.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76776 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76776 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76776 ']' 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.681 23:47:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.681 [2024-12-06 23:47:04.051489] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:12:52.681 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:52.681 Zero copy mechanism will not be used. 
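The `waitforlisten 76776` call traced above blocks until the freshly launched bdevperf process is accepting JSON-RPC connections on `/var/tmp/spdk.sock`. A simplified sketch of that idea (polling until a UNIX-domain socket accepts a connection); this is an illustrative reduction, not the actual helper, which also checks that the PID is still alive:

```python
import os
import socket
import tempfile
import time

def wait_for_rpc_socket(path, retries=100, delay=0.01):
    """Poll until a UNIX socket at `path` accepts a connection.
    Simplified sketch of what waitforlisten does for spdk.sock."""
    for _ in range(retries):
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True
                except OSError:
                    pass  # socket exists but is not accepting yet
        time.sleep(delay)
    return False

# Demo: stand up a listener in this process and wait on it.
with tempfile.TemporaryDirectory() as d:
    sock_path = os.path.join(d, "spdk.sock")
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    print(wait_for_rpc_socket(sock_path))  # True
    server.close()
```

Polling for an accepted connection, rather than just for the socket file, avoids racing the window where the path exists but the server is not yet listening.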
00:12:52.681 [2024-12-06 23:47:04.051714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76776 ] 00:12:52.681 [2024-12-06 23:47:04.222418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.940 [2024-12-06 23:47:04.329140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.199 [2024-12-06 23:47:04.532176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.199 [2024-12-06 23:47:04.532310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.459 BaseBdev1_malloc 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.459 [2024-12-06 23:47:04.916953] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:53.459 [2024-12-06 23:47:04.917054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.459 [2024-12-06 23:47:04.917093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:53.459 [2024-12-06 23:47:04.917123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.459 [2024-12-06 23:47:04.919129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.459 [2024-12-06 23:47:04.919218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:53.459 BaseBdev1 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.459 BaseBdev2_malloc 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.459 [2024-12-06 23:47:04.971860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:53.459 [2024-12-06 23:47:04.971957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:53.459 [2024-12-06 23:47:04.972012] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:53.459 [2024-12-06 23:47:04.972043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.459 [2024-12-06 23:47:04.974035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.459 [2024-12-06 23:47:04.974104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:53.459 BaseBdev2 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.459 23:47:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.717 spare_malloc 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.717 spare_delay 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.717 
[2024-12-06 23:47:05.065056] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:53.717 [2024-12-06 23:47:05.065111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.717 [2024-12-06 23:47:05.065145] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:53.717 [2024-12-06 23:47:05.065155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.717 [2024-12-06 23:47:05.067173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.717 [2024-12-06 23:47:05.067286] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:53.717 spare 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.717 [2024-12-06 23:47:05.077082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.717 [2024-12-06 23:47:05.078812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.717 [2024-12-06 23:47:05.078973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:53.717 [2024-12-06 23:47:05.078987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:53.717 [2024-12-06 23:47:05.079210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:53.717 [2024-12-06 23:47:05.079373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:53.717 [2024-12-06 
23:47:05.079382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:53.717 [2024-12-06 23:47:05.079530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.717 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.717 "name": "raid_bdev1", 00:12:53.717 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:53.718 "strip_size_kb": 0, 00:12:53.718 "state": "online", 00:12:53.718 "raid_level": "raid1", 00:12:53.718 "superblock": true, 00:12:53.718 "num_base_bdevs": 2, 00:12:53.718 "num_base_bdevs_discovered": 2, 00:12:53.718 "num_base_bdevs_operational": 2, 00:12:53.718 "base_bdevs_list": [ 00:12:53.718 { 00:12:53.718 "name": "BaseBdev1", 00:12:53.718 "uuid": "22d644e6-b58e-5f2e-96ca-055373713e24", 00:12:53.718 "is_configured": true, 00:12:53.718 "data_offset": 2048, 00:12:53.718 "data_size": 63488 00:12:53.718 }, 00:12:53.718 { 00:12:53.718 "name": "BaseBdev2", 00:12:53.718 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:53.718 "is_configured": true, 00:12:53.718 "data_offset": 2048, 00:12:53.718 "data_size": 63488 00:12:53.718 } 00:12:53.718 ] 00:12:53.718 }' 00:12:53.718 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.718 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.976 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:53.976 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:53.976 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.976 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.976 [2024-12-06 23:47:05.512641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.976 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.234 [2024-12-06 23:47:05.588207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.234 "name": "raid_bdev1", 00:12:54.234 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:54.234 "strip_size_kb": 0, 00:12:54.234 "state": "online", 00:12:54.234 "raid_level": "raid1", 00:12:54.234 "superblock": true, 00:12:54.234 "num_base_bdevs": 2, 00:12:54.234 "num_base_bdevs_discovered": 1, 00:12:54.234 "num_base_bdevs_operational": 1, 00:12:54.234 "base_bdevs_list": [ 00:12:54.234 { 00:12:54.234 "name": null, 00:12:54.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.234 "is_configured": false, 00:12:54.234 "data_offset": 0, 00:12:54.234 "data_size": 63488 00:12:54.234 }, 00:12:54.234 { 00:12:54.234 "name": "BaseBdev2", 00:12:54.234 "uuid": 
"ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:54.234 "is_configured": true, 00:12:54.234 "data_offset": 2048, 00:12:54.234 "data_size": 63488 00:12:54.234 } 00:12:54.234 ] 00:12:54.234 }' 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.234 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.234 [2024-12-06 23:47:05.696648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:54.234 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:54.234 Zero copy mechanism will not be used. 00:12:54.234 Running I/O for 60 seconds... 00:12:54.493 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:54.493 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.493 23:47:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.493 [2024-12-06 23:47:05.983790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.493 23:47:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.493 23:47:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:54.493 [2024-12-06 23:47:06.040485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:54.493 [2024-12-06 23:47:06.042376] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.751 [2024-12-06 23:47:06.150269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:54.751 [2024-12-06 23:47:06.150822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.010 [2024-12-06 23:47:06.369479] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.010 [2024-12-06 23:47:06.369916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.268 [2024-12-06 23:47:06.695866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:55.527 252.00 IOPS, 756.00 MiB/s [2024-12-06T23:47:07.090Z] 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.527 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.528 "name": "raid_bdev1", 00:12:55.528 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:55.528 "strip_size_kb": 0, 00:12:55.528 "state": "online", 00:12:55.528 "raid_level": "raid1", 00:12:55.528 "superblock": true, 00:12:55.528 "num_base_bdevs": 2, 00:12:55.528 
"num_base_bdevs_discovered": 2, 00:12:55.528 "num_base_bdevs_operational": 2, 00:12:55.528 "process": { 00:12:55.528 "type": "rebuild", 00:12:55.528 "target": "spare", 00:12:55.528 "progress": { 00:12:55.528 "blocks": 12288, 00:12:55.528 "percent": 19 00:12:55.528 } 00:12:55.528 }, 00:12:55.528 "base_bdevs_list": [ 00:12:55.528 { 00:12:55.528 "name": "spare", 00:12:55.528 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:12:55.528 "is_configured": true, 00:12:55.528 "data_offset": 2048, 00:12:55.528 "data_size": 63488 00:12:55.528 }, 00:12:55.528 { 00:12:55.528 "name": "BaseBdev2", 00:12:55.528 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:55.528 "is_configured": true, 00:12:55.528 "data_offset": 2048, 00:12:55.528 "data_size": 63488 00:12:55.528 } 00:12:55.528 ] 00:12:55.528 }' 00:12:55.528 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.788 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.788 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.788 [2024-12-06 23:47:07.174205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:55.788 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.788 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:55.788 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.788 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.788 [2024-12-06 23:47:07.182295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.788 [2024-12-06 23:47:07.279205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:12:55.788 [2024-12-06 23:47:07.285410] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:55.788 [2024-12-06 23:47:07.297822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.788 [2024-12-06 23:47:07.297891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.788 [2024-12-06 23:47:07.297917] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:55.788 [2024-12-06 23:47:07.337788] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.049 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.049 "name": "raid_bdev1", 00:12:56.049 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:56.049 "strip_size_kb": 0, 00:12:56.049 "state": "online", 00:12:56.049 "raid_level": "raid1", 00:12:56.049 "superblock": true, 00:12:56.049 "num_base_bdevs": 2, 00:12:56.049 "num_base_bdevs_discovered": 1, 00:12:56.049 "num_base_bdevs_operational": 1, 00:12:56.050 "base_bdevs_list": [ 00:12:56.050 { 00:12:56.050 "name": null, 00:12:56.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.050 "is_configured": false, 00:12:56.050 "data_offset": 0, 00:12:56.050 "data_size": 63488 00:12:56.050 }, 00:12:56.050 { 00:12:56.050 "name": "BaseBdev2", 00:12:56.050 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:56.050 "is_configured": true, 00:12:56.050 "data_offset": 2048, 00:12:56.050 "data_size": 63488 00:12:56.050 } 00:12:56.050 ] 00:12:56.050 }' 00:12:56.050 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.050 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.310 197.50 IOPS, 592.50 MiB/s [2024-12-06T23:47:07.873Z] 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.310 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.310 
23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.310 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.310 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.311 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.311 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.311 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.311 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.311 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.311 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.311 "name": "raid_bdev1", 00:12:56.311 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:56.311 "strip_size_kb": 0, 00:12:56.311 "state": "online", 00:12:56.311 "raid_level": "raid1", 00:12:56.311 "superblock": true, 00:12:56.311 "num_base_bdevs": 2, 00:12:56.311 "num_base_bdevs_discovered": 1, 00:12:56.311 "num_base_bdevs_operational": 1, 00:12:56.311 "base_bdevs_list": [ 00:12:56.311 { 00:12:56.311 "name": null, 00:12:56.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.311 "is_configured": false, 00:12:56.311 "data_offset": 0, 00:12:56.311 "data_size": 63488 00:12:56.311 }, 00:12:56.311 { 00:12:56.311 "name": "BaseBdev2", 00:12:56.311 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:56.311 "is_configured": true, 00:12:56.311 "data_offset": 2048, 00:12:56.311 "data_size": 63488 00:12:56.311 } 00:12:56.311 ] 00:12:56.311 }' 00:12:56.311 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.571 23:47:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.571 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.571 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.571 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:56.571 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.571 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.571 [2024-12-06 23:47:07.931037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.571 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.571 23:47:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:56.571 [2024-12-06 23:47:07.990510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:56.571 [2024-12-06 23:47:07.992436] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.571 [2024-12-06 23:47:08.099991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:56.571 [2024-12-06 23:47:08.100513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:56.830 [2024-12-06 23:47:08.314812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:56.830 [2024-12-06 23:47:08.315135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.090 [2024-12-06 23:47:08.644791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
8192 offset_begin: 6144 offset_end: 12288 00:12:57.350 179.00 IOPS, 537.00 MiB/s [2024-12-06T23:47:08.913Z] [2024-12-06 23:47:08.770062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.610 23:47:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.610 "name": "raid_bdev1", 00:12:57.610 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:57.610 "strip_size_kb": 0, 00:12:57.610 "state": "online", 00:12:57.610 "raid_level": "raid1", 00:12:57.610 "superblock": true, 00:12:57.610 "num_base_bdevs": 2, 00:12:57.610 "num_base_bdevs_discovered": 2, 00:12:57.610 "num_base_bdevs_operational": 2, 00:12:57.610 "process": { 00:12:57.610 "type": "rebuild", 00:12:57.610 "target": "spare", 00:12:57.610 "progress": { 00:12:57.610 "blocks": 
12288, 00:12:57.610 "percent": 19 00:12:57.610 } 00:12:57.610 }, 00:12:57.610 "base_bdevs_list": [ 00:12:57.610 { 00:12:57.610 "name": "spare", 00:12:57.610 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:12:57.610 "is_configured": true, 00:12:57.610 "data_offset": 2048, 00:12:57.610 "data_size": 63488 00:12:57.610 }, 00:12:57.610 { 00:12:57.610 "name": "BaseBdev2", 00:12:57.610 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:57.610 "is_configured": true, 00:12:57.610 "data_offset": 2048, 00:12:57.610 "data_size": 63488 00:12:57.610 } 00:12:57.610 ] 00:12:57.610 }' 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:57.610 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=418 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.610 [2024-12-06 23:47:09.123901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:57.610 [2024-12-06 23:47:09.124210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.610 "name": "raid_bdev1", 00:12:57.610 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:57.610 "strip_size_kb": 0, 00:12:57.610 "state": "online", 00:12:57.610 "raid_level": "raid1", 00:12:57.610 "superblock": true, 00:12:57.610 "num_base_bdevs": 2, 00:12:57.610 "num_base_bdevs_discovered": 2, 00:12:57.610 "num_base_bdevs_operational": 2, 00:12:57.610 "process": { 00:12:57.610 "type": "rebuild", 00:12:57.610 "target": "spare", 00:12:57.610 "progress": { 00:12:57.610 "blocks": 14336, 
00:12:57.610 "percent": 22 00:12:57.610 } 00:12:57.610 }, 00:12:57.610 "base_bdevs_list": [ 00:12:57.610 { 00:12:57.610 "name": "spare", 00:12:57.610 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:12:57.610 "is_configured": true, 00:12:57.610 "data_offset": 2048, 00:12:57.610 "data_size": 63488 00:12:57.610 }, 00:12:57.610 { 00:12:57.610 "name": "BaseBdev2", 00:12:57.610 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:57.610 "is_configured": true, 00:12:57.610 "data_offset": 2048, 00:12:57.610 "data_size": 63488 00:12:57.610 } 00:12:57.610 ] 00:12:57.610 }' 00:12:57.610 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.871 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.871 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.871 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.871 23:47:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.131 [2024-12-06 23:47:09.463378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:58.131 [2024-12-06 23:47:09.692199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:58.649 153.25 IOPS, 459.75 MiB/s [2024-12-06T23:47:10.212Z] [2024-12-06 23:47:10.024228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.909 "name": "raid_bdev1", 00:12:58.909 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:58.909 "strip_size_kb": 0, 00:12:58.909 "state": "online", 00:12:58.909 "raid_level": "raid1", 00:12:58.909 "superblock": true, 00:12:58.909 "num_base_bdevs": 2, 00:12:58.909 "num_base_bdevs_discovered": 2, 00:12:58.909 "num_base_bdevs_operational": 2, 00:12:58.909 "process": { 00:12:58.909 "type": "rebuild", 00:12:58.909 "target": "spare", 00:12:58.909 "progress": { 00:12:58.909 "blocks": 28672, 00:12:58.909 "percent": 45 00:12:58.909 } 00:12:58.909 }, 00:12:58.909 "base_bdevs_list": [ 00:12:58.909 { 00:12:58.909 "name": "spare", 00:12:58.909 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:12:58.909 "is_configured": true, 00:12:58.909 "data_offset": 2048, 00:12:58.909 "data_size": 63488 00:12:58.909 }, 00:12:58.909 { 00:12:58.909 "name": "BaseBdev2", 00:12:58.909 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:58.909 
"is_configured": true, 00:12:58.909 "data_offset": 2048, 00:12:58.909 "data_size": 63488 00:12:58.909 } 00:12:58.909 ] 00:12:58.909 }' 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.909 23:47:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.169 [2024-12-06 23:47:10.587643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:59.429 131.20 IOPS, 393.60 MiB/s [2024-12-06T23:47:10.992Z] [2024-12-06 23:47:10.925457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:59.688 [2024-12-06 23:47:11.143366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.948 "name": "raid_bdev1", 00:12:59.948 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:12:59.948 "strip_size_kb": 0, 00:12:59.948 "state": "online", 00:12:59.948 "raid_level": "raid1", 00:12:59.948 "superblock": true, 00:12:59.948 "num_base_bdevs": 2, 00:12:59.948 "num_base_bdevs_discovered": 2, 00:12:59.948 "num_base_bdevs_operational": 2, 00:12:59.948 "process": { 00:12:59.948 "type": "rebuild", 00:12:59.948 "target": "spare", 00:12:59.948 "progress": { 00:12:59.948 "blocks": 43008, 00:12:59.948 "percent": 67 00:12:59.948 } 00:12:59.948 }, 00:12:59.948 "base_bdevs_list": [ 00:12:59.948 { 00:12:59.948 "name": "spare", 00:12:59.948 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:12:59.948 "is_configured": true, 00:12:59.948 "data_offset": 2048, 00:12:59.948 "data_size": 63488 00:12:59.948 }, 00:12:59.948 { 00:12:59.948 "name": "BaseBdev2", 00:12:59.948 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:12:59.948 "is_configured": true, 00:12:59.948 "data_offset": 2048, 00:12:59.948 "data_size": 63488 00:12:59.948 } 00:12:59.948 ] 00:12:59.948 }' 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.948 [2024-12-06 23:47:11.475172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:59.948 23:47:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.948 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.208 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.208 23:47:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:00.208 [2024-12-06 23:47:11.686001] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:00.208 [2024-12-06 23:47:11.686258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:00.468 113.67 IOPS, 341.00 MiB/s [2024-12-06T23:47:12.031Z] [2024-12-06 23:47:12.013928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.038 "name": "raid_bdev1", 00:13:01.038 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:13:01.038 "strip_size_kb": 0, 00:13:01.038 "state": "online", 00:13:01.038 "raid_level": "raid1", 00:13:01.038 "superblock": true, 00:13:01.038 "num_base_bdevs": 2, 00:13:01.038 "num_base_bdevs_discovered": 2, 00:13:01.038 "num_base_bdevs_operational": 2, 00:13:01.038 "process": { 00:13:01.038 "type": "rebuild", 00:13:01.038 "target": "spare", 00:13:01.038 "progress": { 00:13:01.038 "blocks": 59392, 00:13:01.038 "percent": 93 00:13:01.038 } 00:13:01.038 }, 00:13:01.038 "base_bdevs_list": [ 00:13:01.038 { 00:13:01.038 "name": "spare", 00:13:01.038 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:13:01.038 "is_configured": true, 00:13:01.038 "data_offset": 2048, 00:13:01.038 "data_size": 63488 00:13:01.038 }, 00:13:01.038 { 00:13:01.038 "name": "BaseBdev2", 00:13:01.038 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:13:01.038 "is_configured": true, 00:13:01.038 "data_offset": 2048, 00:13:01.038 "data_size": 63488 00:13:01.038 } 00:13:01.038 ] 00:13:01.038 }' 00:13:01.038 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.299 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.299 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.299 [2024-12-06 23:47:12.679693] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:01.299 101.57 IOPS, 304.71 MiB/s [2024-12-06T23:47:12.862Z] 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.299 23:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:01.299 [2024-12-06 23:47:12.779482] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:01.299 [2024-12-06 23:47:12.781721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.239 93.25 IOPS, 279.75 MiB/s [2024-12-06T23:47:13.802Z] 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.239 "name": "raid_bdev1", 00:13:02.239 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:13:02.239 "strip_size_kb": 0, 00:13:02.239 "state": "online", 00:13:02.239 "raid_level": 
"raid1", 00:13:02.239 "superblock": true, 00:13:02.239 "num_base_bdevs": 2, 00:13:02.239 "num_base_bdevs_discovered": 2, 00:13:02.239 "num_base_bdevs_operational": 2, 00:13:02.239 "base_bdevs_list": [ 00:13:02.239 { 00:13:02.239 "name": "spare", 00:13:02.239 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:13:02.239 "is_configured": true, 00:13:02.239 "data_offset": 2048, 00:13:02.239 "data_size": 63488 00:13:02.239 }, 00:13:02.239 { 00:13:02.239 "name": "BaseBdev2", 00:13:02.239 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:13:02.239 "is_configured": true, 00:13:02.239 "data_offset": 2048, 00:13:02.239 "data_size": 63488 00:13:02.239 } 00:13:02.239 ] 00:13:02.239 }' 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:02.239 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.499 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.499 "name": "raid_bdev1", 00:13:02.499 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:13:02.499 "strip_size_kb": 0, 00:13:02.499 "state": "online", 00:13:02.499 "raid_level": "raid1", 00:13:02.499 "superblock": true, 00:13:02.499 "num_base_bdevs": 2, 00:13:02.500 "num_base_bdevs_discovered": 2, 00:13:02.500 "num_base_bdevs_operational": 2, 00:13:02.500 "base_bdevs_list": [ 00:13:02.500 { 00:13:02.500 "name": "spare", 00:13:02.500 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:13:02.500 "is_configured": true, 00:13:02.500 "data_offset": 2048, 00:13:02.500 "data_size": 63488 00:13:02.500 }, 00:13:02.500 { 00:13:02.500 "name": "BaseBdev2", 00:13:02.500 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:13:02.500 "is_configured": true, 00:13:02.500 "data_offset": 2048, 00:13:02.500 "data_size": 63488 00:13:02.500 } 00:13:02.500 ] 00:13:02.500 }' 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.500 23:47:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.500 23:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.500 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.500 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.500 "name": "raid_bdev1", 00:13:02.500 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:13:02.500 "strip_size_kb": 0, 00:13:02.500 "state": "online", 00:13:02.500 "raid_level": "raid1", 00:13:02.500 "superblock": true, 00:13:02.500 "num_base_bdevs": 2, 00:13:02.500 "num_base_bdevs_discovered": 2, 00:13:02.500 "num_base_bdevs_operational": 2, 
00:13:02.500 "base_bdevs_list": [ 00:13:02.500 { 00:13:02.500 "name": "spare", 00:13:02.500 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03", 00:13:02.500 "is_configured": true, 00:13:02.500 "data_offset": 2048, 00:13:02.500 "data_size": 63488 00:13:02.500 }, 00:13:02.500 { 00:13:02.500 "name": "BaseBdev2", 00:13:02.500 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:13:02.500 "is_configured": true, 00:13:02.500 "data_offset": 2048, 00:13:02.500 "data_size": 63488 00:13:02.500 } 00:13:02.500 ] 00:13:02.500 }' 00:13:02.500 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.500 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.070 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:03.070 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.070 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.070 [2024-12-06 23:47:14.483545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.070 [2024-12-06 23:47:14.483586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.070 00:13:03.070 Latency(us) 00:13:03.070 [2024-12-06T23:47:14.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.070 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:03.070 raid_bdev1 : 8.81 87.71 263.12 0.00 0.00 16391.48 291.55 114015.47 00:13:03.070 [2024-12-06T23:47:14.633Z] =================================================================================================================== 00:13:03.070 [2024-12-06T23:47:14.633Z] Total : 87.71 263.12 0.00 0.00 16391.48 291.55 114015.47 00:13:03.071 [2024-12-06 23:47:14.516438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:03.071 [2024-12-06 23:47:14.516503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.071 [2024-12-06 23:47:14.516573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.071 [2024-12-06 23:47:14.516585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:03.071 { 00:13:03.071 "results": [ 00:13:03.071 { 00:13:03.071 "job": "raid_bdev1", 00:13:03.071 "core_mask": "0x1", 00:13:03.071 "workload": "randrw", 00:13:03.071 "percentage": 50, 00:13:03.071 "status": "finished", 00:13:03.071 "queue_depth": 2, 00:13:03.071 "io_size": 3145728, 00:13:03.071 "runtime": 8.813521, 00:13:03.071 "iops": 87.7061505838586, 00:13:03.071 "mibps": 263.1184517515758, 00:13:03.071 "io_failed": 0, 00:13:03.071 "io_timeout": 0, 00:13:03.071 "avg_latency_us": 16391.478531440483, 00:13:03.071 "min_latency_us": 291.54934497816595, 00:13:03.071 "max_latency_us": 114015.46899563319 00:13:03.071 } 00:13:03.071 ], 00:13:03.071 "core_count": 1 00:13:03.071 } 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:03.071 23:47:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.071 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:03.331 /dev/nbd0 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w 
nbd0 /proc/partitions 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.331 1+0 records in 00:13:03.331 1+0 records out 00:13:03.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425092 s, 9.6 MB/s 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.331 23:47:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.331 23:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:03.591 /dev/nbd1 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.591 23:47:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:03.591 1+0 records in
00:13:03.591 1+0 records out
00:13:03.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389571 s, 10.5 MB/s
00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096
00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0
00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:03.591 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:13:03.850 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:13:03.850 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:03.851 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:13:03.851 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:03.851 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:03.851 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:03.851 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:04.110 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.371 [2024-12-06 23:47:15.716513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
[2024-12-06 23:47:15.716579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-06 23:47:15.716604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
[2024-12-06 23:47:15.716615] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-06 23:47:15.718826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-06 23:47:15.718866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
[2024-12-06 23:47:15.718956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
[2024-12-06 23:47:15.719020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-12-06 23:47:15.719177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:04.371 spare
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.371 [2024-12-06 23:47:15.819086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-12-06 23:47:15.819116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
[2024-12-06 23:47:15.819440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0
[2024-12-06 23:47:15.819629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-12-06 23:47:15.819653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
[2024-12-06 23:47:15.819837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:04.371 "name": "raid_bdev1",
00:13:04.371 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094",
00:13:04.371 "strip_size_kb": 0,
00:13:04.371 "state": "online",
00:13:04.371 "raid_level": "raid1",
00:13:04.371 "superblock": true,
00:13:04.371 "num_base_bdevs": 2,
00:13:04.371 "num_base_bdevs_discovered": 2,
00:13:04.371 "num_base_bdevs_operational": 2,
00:13:04.371 "base_bdevs_list": [
00:13:04.371 {
00:13:04.371 "name": "spare",
00:13:04.371 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03",
00:13:04.371 "is_configured": true,
00:13:04.371 "data_offset": 2048,
00:13:04.371 "data_size": 63488
00:13:04.371 },
00:13:04.371 {
00:13:04.371 "name": "BaseBdev2",
00:13:04.371 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b",
00:13:04.371 "is_configured": true,
00:13:04.371 "data_offset": 2048,
00:13:04.371 "data_size": 63488
00:13:04.371 }
00:13:04.371 ]
00:13:04.371 }'
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:04.371 23:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:04.951 "name": "raid_bdev1",
00:13:04.951 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094",
00:13:04.951 "strip_size_kb": 0,
00:13:04.951 "state": "online",
00:13:04.951 "raid_level": "raid1",
00:13:04.951 "superblock": true,
00:13:04.951 "num_base_bdevs": 2,
00:13:04.951 "num_base_bdevs_discovered": 2,
00:13:04.951 "num_base_bdevs_operational": 2,
00:13:04.951 "base_bdevs_list": [
00:13:04.951 {
00:13:04.951 "name": "spare",
00:13:04.951 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03",
00:13:04.951 "is_configured": true,
00:13:04.951 "data_offset": 2048,
00:13:04.951 "data_size": 63488
00:13:04.951 },
00:13:04.951 {
00:13:04.951 "name": "BaseBdev2",
00:13:04.951 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b",
00:13:04.951 "is_configured": true,
00:13:04.951 "data_offset": 2048,
00:13:04.951 "data_size": 63488
00:13:04.951 }
00:13:04.951 ]
00:13:04.951 }'
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.951 [2024-12-06 23:47:16.439412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:04.951 "name": "raid_bdev1",
00:13:04.951 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094",
00:13:04.951 "strip_size_kb": 0,
00:13:04.951 "state": "online",
00:13:04.951 "raid_level": "raid1",
00:13:04.951 "superblock": true,
00:13:04.951 "num_base_bdevs": 2,
00:13:04.951 "num_base_bdevs_discovered": 1,
00:13:04.951 "num_base_bdevs_operational": 1,
00:13:04.951 "base_bdevs_list": [
00:13:04.951 {
00:13:04.951 "name": null,
00:13:04.951 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:04.951 "is_configured": false,
00:13:04.951 "data_offset": 0,
00:13:04.951 "data_size": 63488
00:13:04.951 },
00:13:04.951 {
00:13:04.951 "name": "BaseBdev2",
00:13:04.951 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b",
00:13:04.951 "is_configured": true,
00:13:04.951 "data_offset": 2048,
00:13:04.951 "data_size": 63488
00:13:04.951 }
00:13:04.951 ]
00:13:04.951 }'
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:04.951 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:05.520 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:05.520 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.520 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:05.520 [2024-12-06 23:47:16.871441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-12-06 23:47:16.871665] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
[2024-12-06 23:47:16.871699] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
[2024-12-06 23:47:16.871745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-12-06 23:47:16.889476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0
00:13:05.520 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.520 23:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:13:05.520 [2024-12-06 23:47:16.891460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:06.458 "name": "raid_bdev1",
00:13:06.458 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094",
00:13:06.458 "strip_size_kb": 0,
00:13:06.458 "state": "online",
00:13:06.458 "raid_level": "raid1",
00:13:06.458 "superblock": true,
00:13:06.458 "num_base_bdevs": 2,
00:13:06.458 "num_base_bdevs_discovered": 2,
00:13:06.458 "num_base_bdevs_operational": 2,
00:13:06.458 "process": {
00:13:06.458 "type": "rebuild",
00:13:06.458 "target": "spare",
00:13:06.458 "progress": {
00:13:06.458 "blocks": 20480,
00:13:06.458 "percent": 32
00:13:06.458 }
00:13:06.458 },
00:13:06.458 "base_bdevs_list": [
00:13:06.458 {
00:13:06.458 "name": "spare",
00:13:06.458 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03",
00:13:06.458 "is_configured": true,
00:13:06.458 "data_offset": 2048,
00:13:06.458 "data_size": 63488
00:13:06.458 },
00:13:06.458 {
00:13:06.458 "name": "BaseBdev2",
00:13:06.458 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b",
00:13:06.458 "is_configured": true,
00:13:06.458 "data_offset": 2048,
00:13:06.458 "data_size": 63488
00:13:06.458 }
00:13:06.458 ]
00:13:06.458 }'
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:06.458 23:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:06.717 [2024-12-06 23:47:18.023511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-12-06 23:47:18.096772] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
[2024-12-06 23:47:18.096828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-12-06 23:47:18.096861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-12-06 23:47:18.096868] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:06.717 "name": "raid_bdev1",
00:13:06.717 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094",
00:13:06.717 "strip_size_kb": 0,
00:13:06.717 "state": "online",
00:13:06.717 "raid_level": "raid1",
00:13:06.717 "superblock": true,
00:13:06.717 "num_base_bdevs": 2,
00:13:06.717 "num_base_bdevs_discovered": 1,
00:13:06.717 "num_base_bdevs_operational": 1,
00:13:06.717 "base_bdevs_list": [
00:13:06.717 {
00:13:06.717 "name": null,
00:13:06.717 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:06.717 "is_configured": false,
00:13:06.717 "data_offset": 0,
00:13:06.717 "data_size": 63488
00:13:06.717 },
00:13:06.717 {
00:13:06.717 "name": "BaseBdev2",
00:13:06.717 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b",
00:13:06.717 "is_configured": true,
00:13:06.717 "data_offset": 2048,
00:13:06.717 "data_size": 63488
00:13:06.717 }
00:13:06.717 ]
00:13:06.717 }'
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:06.717 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:07.286 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:07.286 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.286 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:07.286 [2024-12-06 23:47:18.593390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
[2024-12-06 23:47:18.593474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-06 23:47:18.593496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
[2024-12-06 23:47:18.593505] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-06 23:47:18.593983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-06 23:47:18.594010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
[2024-12-06 23:47:18.594099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
[2024-12-06 23:47:18.594116] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
[2024-12-06 23:47:18.594128] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
[2024-12-06 23:47:18.594147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-12-06 23:47:18.610307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270
00:13:07.286 spare
00:13:07.286 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.286 [2024-12-06 23:47:18.612111] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:07.286 23:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:08.227 "name": "raid_bdev1",
00:13:08.227 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094",
00:13:08.227 "strip_size_kb": 0,
00:13:08.227 "state": "online",
00:13:08.227 "raid_level": "raid1",
00:13:08.227 "superblock": true,
00:13:08.227 "num_base_bdevs": 2,
00:13:08.227 "num_base_bdevs_discovered": 2,
00:13:08.227 "num_base_bdevs_operational": 2,
00:13:08.227 "process": {
00:13:08.227 "type": "rebuild",
00:13:08.227 "target": "spare",
00:13:08.227 "progress": {
00:13:08.227 "blocks": 20480,
00:13:08.227 "percent": 32
00:13:08.227 }
00:13:08.227 },
00:13:08.227 "base_bdevs_list": [
00:13:08.227 {
00:13:08.227 "name": "spare",
00:13:08.227 "uuid": "5b235945-3b16-5707-b9bf-59e20f762e03",
00:13:08.227 "is_configured": true,
00:13:08.227 "data_offset": 2048,
00:13:08.227 "data_size": 63488
00:13:08.227 },
00:13:08.227 {
00:13:08.227 "name": "BaseBdev2",
00:13:08.227 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b",
00:13:08.227 "is_configured": true,
00:13:08.227 "data_offset": 2048,
00:13:08.227 "data_size": 63488
00:13:08.227 }
00:13:08.227 ]
00:13:08.227 }'
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.227 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:08.227 [2024-12-06 23:47:19.775862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:08.488 [2024-12-06 23:47:19.816717] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
[2024-12-06 23:47:19.816792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-12-06 23:47:19.816806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-12-06 23:47:19.816816] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:08.488 "name": "raid_bdev1",
00:13:08.488 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094",
00:13:08.488 "strip_size_kb": 0,
00:13:08.488 "state": "online",
00:13:08.488 "raid_level": "raid1",
00:13:08.488 "superblock": true,
00:13:08.488 "num_base_bdevs": 2,
00:13:08.488 "num_base_bdevs_discovered": 1,
00:13:08.488 "num_base_bdevs_operational": 1,
00:13:08.488 "base_bdevs_list": [
00:13:08.488 {
00:13:08.488 "name": null,
00:13:08.488 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:08.488 "is_configured": false,
00:13:08.488 "data_offset": 0,
00:13:08.488 "data_size": 63488
00:13:08.488 },
00:13:08.488 {
00:13:08.488 "name": "BaseBdev2",
00:13:08.488 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b",
00:13:08.488 "is_configured": true,
00:13:08.488 "data_offset": 2048,
00:13:08.488 "data_size": 63488
00:13:08.488 }
00:13:08.488 ]
00:13:08.488 }'
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:08.488 23:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:09.058 "name": "raid_bdev1",
00:13:09.058 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094",
00:13:09.058 "strip_size_kb": 0,
00:13:09.058 "state": "online",
00:13:09.058 "raid_level": "raid1",
00:13:09.058 "superblock": true,
00:13:09.058 "num_base_bdevs": 2,
00:13:09.058 "num_base_bdevs_discovered": 1,
00:13:09.058 "num_base_bdevs_operational": 1,
00:13:09.058 "base_bdevs_list": [
00:13:09.058 {
00:13:09.058 "name": null,
00:13:09.058 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:09.058 "is_configured": false,
00:13:09.058 "data_offset": 0,
00:13:09.058 "data_size": 63488
00:13:09.058 },
00:13:09.058 {
00:13:09.058 "name": "BaseBdev2",
00:13:09.058 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b",
00:13:09.058 "is_configured": true,
00:13:09.058 "data_offset": 2048,
00:13:09.058 "data_size": 63488
00:13:09.058 }
00:13:09.058 ]
00:13:09.058 }'
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:09.058 [2024-12-06 23:47:20.515380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
[2024-12-06 23:47:20.515438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-06 23:47:20.515478] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
[2024-12-06 23:47:20.515492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-06 23:47:20.515943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-06 23:47:20.515971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
[2024-12-06 23:47:20.516042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
[2024-12-06 23:47:20.516059] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
[2024-12-06 23:47:20.516068] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
[2024-12-06 23:47:20.516080] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:13:09.058 BaseBdev1
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.058 23:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:09.998 23:47:21 bdev_raid.raid_rebuild_test_sb_io --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.258 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.258 "name": "raid_bdev1", 00:13:10.258 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:13:10.258 "strip_size_kb": 0, 00:13:10.258 "state": "online", 00:13:10.258 "raid_level": "raid1", 00:13:10.258 "superblock": true, 00:13:10.258 "num_base_bdevs": 2, 00:13:10.258 "num_base_bdevs_discovered": 1, 00:13:10.258 "num_base_bdevs_operational": 1, 00:13:10.258 "base_bdevs_list": [ 00:13:10.258 { 00:13:10.258 "name": null, 00:13:10.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.258 "is_configured": false, 00:13:10.258 "data_offset": 0, 00:13:10.258 "data_size": 63488 00:13:10.258 }, 00:13:10.258 { 00:13:10.258 "name": "BaseBdev2", 00:13:10.258 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:13:10.259 "is_configured": true, 00:13:10.259 "data_offset": 2048, 00:13:10.259 "data_size": 63488 00:13:10.259 } 00:13:10.259 ] 00:13:10.259 }' 00:13:10.259 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.259 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.519 "name": "raid_bdev1", 00:13:10.519 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:13:10.519 "strip_size_kb": 0, 00:13:10.519 "state": "online", 00:13:10.519 "raid_level": "raid1", 00:13:10.519 "superblock": true, 00:13:10.519 "num_base_bdevs": 2, 00:13:10.519 "num_base_bdevs_discovered": 1, 00:13:10.519 "num_base_bdevs_operational": 1, 00:13:10.519 "base_bdevs_list": [ 00:13:10.519 { 00:13:10.519 "name": null, 00:13:10.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.519 "is_configured": false, 00:13:10.519 "data_offset": 0, 00:13:10.519 "data_size": 63488 00:13:10.519 }, 00:13:10.519 { 00:13:10.519 "name": "BaseBdev2", 00:13:10.519 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:13:10.519 "is_configured": true, 00:13:10.519 "data_offset": 2048, 00:13:10.519 "data_size": 63488 00:13:10.519 } 00:13:10.519 ] 00:13:10.519 }' 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.519 23:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.519 [2024-12-06 23:47:22.052876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.519 [2024-12-06 23:47:22.053024] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:10.519 [2024-12-06 23:47:22.053044] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:10.519 request: 00:13:10.519 { 00:13:10.519 "base_bdev": "BaseBdev1", 00:13:10.519 "raid_bdev": "raid_bdev1", 00:13:10.519 "method": "bdev_raid_add_base_bdev", 00:13:10.519 "req_id": 1 00:13:10.519 } 00:13:10.519 Got JSON-RPC error response 00:13:10.519 response: 00:13:10.519 { 00:13:10.519 "code": -22, 00:13:10.519 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:10.519 } 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:10.519 23:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.901 "name": "raid_bdev1", 00:13:11.901 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:13:11.901 "strip_size_kb": 0, 00:13:11.901 "state": "online", 00:13:11.901 "raid_level": "raid1", 00:13:11.901 "superblock": true, 00:13:11.901 "num_base_bdevs": 2, 00:13:11.901 "num_base_bdevs_discovered": 1, 00:13:11.901 "num_base_bdevs_operational": 1, 00:13:11.901 "base_bdevs_list": [ 00:13:11.901 { 00:13:11.901 "name": null, 00:13:11.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.901 "is_configured": false, 00:13:11.901 "data_offset": 0, 00:13:11.901 "data_size": 63488 00:13:11.901 }, 00:13:11.901 { 00:13:11.901 "name": "BaseBdev2", 00:13:11.901 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:13:11.901 "is_configured": true, 00:13:11.901 "data_offset": 2048, 00:13:11.901 "data_size": 63488 00:13:11.901 } 00:13:11.901 ] 00:13:11.901 }' 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.901 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.161 23:47:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.161 "name": "raid_bdev1", 00:13:12.161 "uuid": "af4536e6-257e-4c99-bfbc-c9a11ea67094", 00:13:12.161 "strip_size_kb": 0, 00:13:12.161 "state": "online", 00:13:12.161 "raid_level": "raid1", 00:13:12.161 "superblock": true, 00:13:12.161 "num_base_bdevs": 2, 00:13:12.161 "num_base_bdevs_discovered": 1, 00:13:12.161 "num_base_bdevs_operational": 1, 00:13:12.161 "base_bdevs_list": [ 00:13:12.161 { 00:13:12.161 "name": null, 00:13:12.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.161 "is_configured": false, 00:13:12.161 "data_offset": 0, 00:13:12.161 "data_size": 63488 00:13:12.161 }, 00:13:12.161 { 00:13:12.161 "name": "BaseBdev2", 00:13:12.161 "uuid": "ba68f2e5-7ceb-52ba-9168-a520b9211b8b", 00:13:12.161 "is_configured": true, 00:13:12.161 "data_offset": 2048, 00:13:12.161 "data_size": 63488 00:13:12.161 } 00:13:12.161 ] 00:13:12.161 }' 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.161 23:47:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76776 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76776 ']' 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76776 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76776 00:13:12.161 killing process with pid 76776 00:13:12.161 Received shutdown signal, test time was about 18.028047 seconds 00:13:12.161 00:13:12.161 Latency(us) 00:13:12.161 [2024-12-06T23:47:23.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.161 [2024-12-06T23:47:23.724Z] =================================================================================================================== 00:13:12.161 [2024-12-06T23:47:23.724Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76776' 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76776 00:13:12.161 [2024-12-06 23:47:23.692267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.161 [2024-12-06 23:47:23.692382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.161 23:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76776 00:13:12.161 [2024-12-06 23:47:23.692439] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.161 [2024-12-06 23:47:23.692449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:12.422 [2024-12-06 23:47:23.911324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:13.818 00:13:13.818 real 0m21.053s 00:13:13.818 user 0m27.373s 00:13:13.818 sys 0m2.189s 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.818 ************************************ 00:13:13.818 END TEST raid_rebuild_test_sb_io 00:13:13.818 ************************************ 00:13:13.818 23:47:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:13.818 23:47:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:13.818 23:47:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:13.818 23:47:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.818 23:47:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.818 ************************************ 00:13:13.818 START TEST raid_rebuild_test 00:13:13.818 ************************************ 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:13.818 23:47:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77478 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77478 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77478 ']' 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.818 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.818 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:13.818 Zero copy mechanism will not be used. 
00:13:13.818 [2024-12-06 23:47:25.195499] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:13:13.818 [2024-12-06 23:47:25.195618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77478 ] 00:13:13.818 [2024-12-06 23:47:25.371888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.076 [2024-12-06 23:47:25.476768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.335 [2024-12-06 23:47:25.666160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.335 [2024-12-06 23:47:25.666216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.595 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.595 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:14.595 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.595 23:47:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:14.595 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.595 23:47:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.595 BaseBdev1_malloc 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.595 
[2024-12-06 23:47:26.040606] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:14.595 [2024-12-06 23:47:26.040682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.595 [2024-12-06 23:47:26.040705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:14.595 [2024-12-06 23:47:26.040717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.595 [2024-12-06 23:47:26.042691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.595 [2024-12-06 23:47:26.042728] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:14.595 BaseBdev1 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.595 BaseBdev2_malloc 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.595 [2024-12-06 23:47:26.089732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:14.595 [2024-12-06 23:47:26.089790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:14.595 [2024-12-06 23:47:26.089829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:14.595 [2024-12-06 23:47:26.089840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.595 [2024-12-06 23:47:26.091823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.595 [2024-12-06 23:47:26.091864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:14.595 BaseBdev2 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.595 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 BaseBdev3_malloc 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 [2024-12-06 23:47:26.178280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:14.855 [2024-12-06 23:47:26.178336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.855 [2024-12-06 23:47:26.178372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:14.855 [2024-12-06 23:47:26.178383] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.855 [2024-12-06 23:47:26.180340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.855 [2024-12-06 23:47:26.180385] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:14.855 BaseBdev3 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 BaseBdev4_malloc 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 [2024-12-06 23:47:26.230761] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:14.855 [2024-12-06 23:47:26.230822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.855 [2024-12-06 23:47:26.230857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:14.855 [2024-12-06 23:47:26.230868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.855 [2024-12-06 23:47:26.232862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.855 [2024-12-06 23:47:26.232903] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:14.855 BaseBdev4 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 spare_malloc 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 spare_delay 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 [2024-12-06 23:47:26.291624] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.855 [2024-12-06 23:47:26.291701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.855 [2024-12-06 23:47:26.291718] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:14.855 [2024-12-06 23:47:26.291729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.855 [2024-12-06 
23:47:26.293700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.855 [2024-12-06 23:47:26.293735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.855 spare 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 [2024-12-06 23:47:26.303647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.855 [2024-12-06 23:47:26.305394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.855 [2024-12-06 23:47:26.305450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.855 [2024-12-06 23:47:26.305498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:14.855 [2024-12-06 23:47:26.305573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:14.855 [2024-12-06 23:47:26.305585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:14.855 [2024-12-06 23:47:26.305836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:14.855 [2024-12-06 23:47:26.306013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:14.855 [2024-12-06 23:47:26.306029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:14.855 [2024-12-06 23:47:26.306163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.855 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.855 "name": "raid_bdev1", 00:13:14.855 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:14.855 "strip_size_kb": 0, 00:13:14.855 "state": "online", 00:13:14.855 "raid_level": 
"raid1", 00:13:14.855 "superblock": false, 00:13:14.855 "num_base_bdevs": 4, 00:13:14.855 "num_base_bdevs_discovered": 4, 00:13:14.855 "num_base_bdevs_operational": 4, 00:13:14.855 "base_bdevs_list": [ 00:13:14.855 { 00:13:14.855 "name": "BaseBdev1", 00:13:14.855 "uuid": "9d88be77-6e83-5c6b-8da7-12e3709dd0fe", 00:13:14.855 "is_configured": true, 00:13:14.855 "data_offset": 0, 00:13:14.855 "data_size": 65536 00:13:14.855 }, 00:13:14.855 { 00:13:14.855 "name": "BaseBdev2", 00:13:14.855 "uuid": "74b55bd1-0d5c-5549-956a-d687f0bd3642", 00:13:14.855 "is_configured": true, 00:13:14.855 "data_offset": 0, 00:13:14.855 "data_size": 65536 00:13:14.855 }, 00:13:14.855 { 00:13:14.855 "name": "BaseBdev3", 00:13:14.855 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:14.856 "is_configured": true, 00:13:14.856 "data_offset": 0, 00:13:14.856 "data_size": 65536 00:13:14.856 }, 00:13:14.856 { 00:13:14.856 "name": "BaseBdev4", 00:13:14.856 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:14.856 "is_configured": true, 00:13:14.856 "data_offset": 0, 00:13:14.856 "data_size": 65536 00:13:14.856 } 00:13:14.856 ] 00:13:14.856 }' 00:13:14.856 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.856 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.425 [2024-12-06 23:47:26.727356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.425 23:47:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.425 23:47:26 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:15.686 [2024-12-06 23:47:26.986802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:15.686 /dev/nbd0 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.686 1+0 records in 00:13:15.686 1+0 records out 00:13:15.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010513 s, 3.9 MB/s 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:15.686 23:47:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:20.983 65536+0 records in 00:13:20.983 65536+0 records out 00:13:20.983 33554432 bytes (34 MB, 32 MiB) copied, 5.17983 s, 6.5 MB/s 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:20.983 [2024-12-06 23:47:32.437710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.983 
23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.983 [2024-12-06 23:47:32.465711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.983 23:47:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.983 "name": "raid_bdev1", 00:13:20.983 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:20.983 "strip_size_kb": 0, 00:13:20.983 "state": "online", 00:13:20.983 "raid_level": "raid1", 00:13:20.983 "superblock": false, 00:13:20.983 "num_base_bdevs": 4, 00:13:20.983 "num_base_bdevs_discovered": 3, 00:13:20.983 "num_base_bdevs_operational": 3, 00:13:20.983 "base_bdevs_list": [ 00:13:20.983 { 00:13:20.983 "name": null, 00:13:20.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.983 "is_configured": false, 00:13:20.983 "data_offset": 0, 00:13:20.983 "data_size": 65536 00:13:20.983 }, 00:13:20.983 { 00:13:20.983 "name": "BaseBdev2", 00:13:20.983 "uuid": "74b55bd1-0d5c-5549-956a-d687f0bd3642", 00:13:20.983 "is_configured": true, 00:13:20.983 "data_offset": 0, 00:13:20.983 "data_size": 65536 00:13:20.983 }, 00:13:20.983 { 00:13:20.983 "name": "BaseBdev3", 00:13:20.983 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:20.983 "is_configured": true, 00:13:20.983 "data_offset": 0, 00:13:20.983 "data_size": 65536 00:13:20.983 }, 00:13:20.983 { 00:13:20.983 "name": "BaseBdev4", 00:13:20.983 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:20.983 
"is_configured": true, 00:13:20.983 "data_offset": 0, 00:13:20.983 "data_size": 65536 00:13:20.983 } 00:13:20.983 ] 00:13:20.983 }' 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.983 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.555 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.555 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.555 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.555 [2024-12-06 23:47:32.944829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.555 [2024-12-06 23:47:32.958153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:21.555 23:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.555 23:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:21.555 [2024-12-06 23:47:32.959981] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.495 
23:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.495 23:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.495 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.495 "name": "raid_bdev1", 00:13:22.495 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:22.495 "strip_size_kb": 0, 00:13:22.495 "state": "online", 00:13:22.495 "raid_level": "raid1", 00:13:22.495 "superblock": false, 00:13:22.495 "num_base_bdevs": 4, 00:13:22.495 "num_base_bdevs_discovered": 4, 00:13:22.495 "num_base_bdevs_operational": 4, 00:13:22.495 "process": { 00:13:22.495 "type": "rebuild", 00:13:22.495 "target": "spare", 00:13:22.495 "progress": { 00:13:22.495 "blocks": 20480, 00:13:22.495 "percent": 31 00:13:22.495 } 00:13:22.495 }, 00:13:22.495 "base_bdevs_list": [ 00:13:22.495 { 00:13:22.495 "name": "spare", 00:13:22.495 "uuid": "76c17f80-a6aa-5f62-bd0c-25f0502990d9", 00:13:22.496 "is_configured": true, 00:13:22.496 "data_offset": 0, 00:13:22.496 "data_size": 65536 00:13:22.496 }, 00:13:22.496 { 00:13:22.496 "name": "BaseBdev2", 00:13:22.496 "uuid": "74b55bd1-0d5c-5549-956a-d687f0bd3642", 00:13:22.496 "is_configured": true, 00:13:22.496 "data_offset": 0, 00:13:22.496 "data_size": 65536 00:13:22.496 }, 00:13:22.496 { 00:13:22.496 "name": "BaseBdev3", 00:13:22.496 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:22.496 "is_configured": true, 00:13:22.496 "data_offset": 0, 00:13:22.496 "data_size": 65536 00:13:22.496 }, 00:13:22.496 { 00:13:22.496 "name": "BaseBdev4", 00:13:22.496 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:22.496 "is_configured": true, 00:13:22.496 "data_offset": 0, 00:13:22.496 "data_size": 65536 00:13:22.496 } 00:13:22.496 ] 00:13:22.496 }' 00:13:22.496 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.756 [2024-12-06 23:47:34.107716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.756 [2024-12-06 23:47:34.164613] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.756 [2024-12-06 23:47:34.164696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.756 [2024-12-06 23:47:34.164714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.756 [2024-12-06 23:47:34.164723] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.756 23:47:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.756 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.756 "name": "raid_bdev1", 00:13:22.756 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:22.756 "strip_size_kb": 0, 00:13:22.756 "state": "online", 00:13:22.756 "raid_level": "raid1", 00:13:22.756 "superblock": false, 00:13:22.756 "num_base_bdevs": 4, 00:13:22.756 "num_base_bdevs_discovered": 3, 00:13:22.756 "num_base_bdevs_operational": 3, 00:13:22.756 "base_bdevs_list": [ 00:13:22.756 { 00:13:22.756 "name": null, 00:13:22.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.756 "is_configured": false, 00:13:22.756 "data_offset": 0, 00:13:22.756 "data_size": 65536 00:13:22.756 }, 00:13:22.756 { 00:13:22.756 "name": "BaseBdev2", 00:13:22.756 "uuid": "74b55bd1-0d5c-5549-956a-d687f0bd3642", 00:13:22.756 "is_configured": true, 00:13:22.756 "data_offset": 0, 00:13:22.756 "data_size": 65536 00:13:22.756 }, 00:13:22.756 { 00:13:22.756 "name": 
"BaseBdev3", 00:13:22.756 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:22.756 "is_configured": true, 00:13:22.756 "data_offset": 0, 00:13:22.756 "data_size": 65536 00:13:22.756 }, 00:13:22.757 { 00:13:22.757 "name": "BaseBdev4", 00:13:22.757 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:22.757 "is_configured": true, 00:13:22.757 "data_offset": 0, 00:13:22.757 "data_size": 65536 00:13:22.757 } 00:13:22.757 ] 00:13:22.757 }' 00:13:22.757 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.757 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.327 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.327 "name": "raid_bdev1", 00:13:23.327 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:23.327 "strip_size_kb": 0, 00:13:23.327 "state": "online", 00:13:23.327 "raid_level": 
"raid1", 00:13:23.327 "superblock": false, 00:13:23.327 "num_base_bdevs": 4, 00:13:23.327 "num_base_bdevs_discovered": 3, 00:13:23.327 "num_base_bdevs_operational": 3, 00:13:23.327 "base_bdevs_list": [ 00:13:23.327 { 00:13:23.327 "name": null, 00:13:23.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.327 "is_configured": false, 00:13:23.327 "data_offset": 0, 00:13:23.327 "data_size": 65536 00:13:23.327 }, 00:13:23.328 { 00:13:23.328 "name": "BaseBdev2", 00:13:23.328 "uuid": "74b55bd1-0d5c-5549-956a-d687f0bd3642", 00:13:23.328 "is_configured": true, 00:13:23.328 "data_offset": 0, 00:13:23.328 "data_size": 65536 00:13:23.328 }, 00:13:23.328 { 00:13:23.328 "name": "BaseBdev3", 00:13:23.328 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:23.328 "is_configured": true, 00:13:23.328 "data_offset": 0, 00:13:23.328 "data_size": 65536 00:13:23.328 }, 00:13:23.328 { 00:13:23.328 "name": "BaseBdev4", 00:13:23.328 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:23.328 "is_configured": true, 00:13:23.328 "data_offset": 0, 00:13:23.328 "data_size": 65536 00:13:23.328 } 00:13:23.328 ] 00:13:23.328 }' 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.328 [2024-12-06 23:47:34.719769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:23.328 [2024-12-06 23:47:34.733137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.328 23:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:23.328 [2024-12-06 23:47:34.734921] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:24.268 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.268 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.268 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.269 "name": "raid_bdev1", 00:13:24.269 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:24.269 "strip_size_kb": 0, 00:13:24.269 "state": "online", 00:13:24.269 "raid_level": "raid1", 00:13:24.269 "superblock": false, 00:13:24.269 "num_base_bdevs": 4, 00:13:24.269 "num_base_bdevs_discovered": 4, 00:13:24.269 "num_base_bdevs_operational": 4, 
00:13:24.269 "process": { 00:13:24.269 "type": "rebuild", 00:13:24.269 "target": "spare", 00:13:24.269 "progress": { 00:13:24.269 "blocks": 20480, 00:13:24.269 "percent": 31 00:13:24.269 } 00:13:24.269 }, 00:13:24.269 "base_bdevs_list": [ 00:13:24.269 { 00:13:24.269 "name": "spare", 00:13:24.269 "uuid": "76c17f80-a6aa-5f62-bd0c-25f0502990d9", 00:13:24.269 "is_configured": true, 00:13:24.269 "data_offset": 0, 00:13:24.269 "data_size": 65536 00:13:24.269 }, 00:13:24.269 { 00:13:24.269 "name": "BaseBdev2", 00:13:24.269 "uuid": "74b55bd1-0d5c-5549-956a-d687f0bd3642", 00:13:24.269 "is_configured": true, 00:13:24.269 "data_offset": 0, 00:13:24.269 "data_size": 65536 00:13:24.269 }, 00:13:24.269 { 00:13:24.269 "name": "BaseBdev3", 00:13:24.269 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:24.269 "is_configured": true, 00:13:24.269 "data_offset": 0, 00:13:24.269 "data_size": 65536 00:13:24.269 }, 00:13:24.269 { 00:13:24.269 "name": "BaseBdev4", 00:13:24.269 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:24.269 "is_configured": true, 00:13:24.269 "data_offset": 0, 00:13:24.269 "data_size": 65536 00:13:24.269 } 00:13:24.269 ] 00:13:24.269 }' 00:13:24.269 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.530 [2024-12-06 23:47:35.894512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.530 [2024-12-06 23:47:35.939441] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.530 "name": "raid_bdev1", 00:13:24.530 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:24.530 "strip_size_kb": 0, 00:13:24.530 "state": "online", 00:13:24.530 "raid_level": "raid1", 00:13:24.530 "superblock": false, 00:13:24.530 "num_base_bdevs": 4, 00:13:24.530 "num_base_bdevs_discovered": 3, 00:13:24.530 "num_base_bdevs_operational": 3, 00:13:24.530 "process": { 00:13:24.530 "type": "rebuild", 00:13:24.530 "target": "spare", 00:13:24.530 "progress": { 00:13:24.530 "blocks": 24576, 00:13:24.530 "percent": 37 00:13:24.530 } 00:13:24.530 }, 00:13:24.530 "base_bdevs_list": [ 00:13:24.530 { 00:13:24.530 "name": "spare", 00:13:24.530 "uuid": "76c17f80-a6aa-5f62-bd0c-25f0502990d9", 00:13:24.530 "is_configured": true, 00:13:24.530 "data_offset": 0, 00:13:24.530 "data_size": 65536 00:13:24.530 }, 00:13:24.530 { 00:13:24.530 "name": null, 00:13:24.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.530 "is_configured": false, 00:13:24.530 "data_offset": 0, 00:13:24.530 "data_size": 65536 00:13:24.530 }, 00:13:24.530 { 00:13:24.530 "name": "BaseBdev3", 00:13:24.530 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:24.530 "is_configured": true, 00:13:24.530 "data_offset": 0, 00:13:24.530 "data_size": 65536 00:13:24.530 }, 00:13:24.530 { 00:13:24.530 "name": "BaseBdev4", 00:13:24.530 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:24.530 "is_configured": true, 00:13:24.530 "data_offset": 0, 00:13:24.530 "data_size": 65536 00:13:24.530 } 00:13:24.530 ] 00:13:24.530 }' 00:13:24.530 23:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.530 23:47:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.530 23:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.790 23:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.790 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.790 "name": "raid_bdev1", 00:13:24.790 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:24.790 "strip_size_kb": 0, 00:13:24.790 "state": "online", 00:13:24.790 "raid_level": "raid1", 00:13:24.790 "superblock": false, 00:13:24.790 "num_base_bdevs": 4, 00:13:24.790 "num_base_bdevs_discovered": 3, 00:13:24.790 "num_base_bdevs_operational": 3, 00:13:24.790 "process": { 00:13:24.790 "type": "rebuild", 00:13:24.790 "target": "spare", 00:13:24.790 "progress": { 00:13:24.790 "blocks": 26624, 00:13:24.790 "percent": 40 
00:13:24.790 } 00:13:24.790 }, 00:13:24.790 "base_bdevs_list": [ 00:13:24.790 { 00:13:24.790 "name": "spare", 00:13:24.790 "uuid": "76c17f80-a6aa-5f62-bd0c-25f0502990d9", 00:13:24.790 "is_configured": true, 00:13:24.790 "data_offset": 0, 00:13:24.790 "data_size": 65536 00:13:24.790 }, 00:13:24.790 { 00:13:24.790 "name": null, 00:13:24.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.790 "is_configured": false, 00:13:24.790 "data_offset": 0, 00:13:24.790 "data_size": 65536 00:13:24.790 }, 00:13:24.790 { 00:13:24.790 "name": "BaseBdev3", 00:13:24.790 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:24.790 "is_configured": true, 00:13:24.790 "data_offset": 0, 00:13:24.790 "data_size": 65536 00:13:24.790 }, 00:13:24.790 { 00:13:24.790 "name": "BaseBdev4", 00:13:24.790 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:24.790 "is_configured": true, 00:13:24.790 "data_offset": 0, 00:13:24.790 "data_size": 65536 00:13:24.790 } 00:13:24.790 ] 00:13:24.790 }' 00:13:24.790 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.790 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.790 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.790 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.790 23:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.730 23:47:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.730 "name": "raid_bdev1", 00:13:25.730 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:25.730 "strip_size_kb": 0, 00:13:25.730 "state": "online", 00:13:25.730 "raid_level": "raid1", 00:13:25.730 "superblock": false, 00:13:25.730 "num_base_bdevs": 4, 00:13:25.730 "num_base_bdevs_discovered": 3, 00:13:25.730 "num_base_bdevs_operational": 3, 00:13:25.730 "process": { 00:13:25.730 "type": "rebuild", 00:13:25.730 "target": "spare", 00:13:25.730 "progress": { 00:13:25.730 "blocks": 49152, 00:13:25.730 "percent": 75 00:13:25.730 } 00:13:25.730 }, 00:13:25.730 "base_bdevs_list": [ 00:13:25.730 { 00:13:25.730 "name": "spare", 00:13:25.730 "uuid": "76c17f80-a6aa-5f62-bd0c-25f0502990d9", 00:13:25.730 "is_configured": true, 00:13:25.730 "data_offset": 0, 00:13:25.730 "data_size": 65536 00:13:25.730 }, 00:13:25.730 { 00:13:25.730 "name": null, 00:13:25.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.730 "is_configured": false, 00:13:25.730 "data_offset": 0, 00:13:25.730 "data_size": 65536 00:13:25.730 }, 00:13:25.730 { 00:13:25.730 "name": "BaseBdev3", 00:13:25.730 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:25.730 "is_configured": true, 
00:13:25.730 "data_offset": 0, 00:13:25.730 "data_size": 65536 00:13:25.730 }, 00:13:25.730 { 00:13:25.730 "name": "BaseBdev4", 00:13:25.730 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:25.730 "is_configured": true, 00:13:25.730 "data_offset": 0, 00:13:25.730 "data_size": 65536 00:13:25.730 } 00:13:25.730 ] 00:13:25.730 }' 00:13:25.730 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.990 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.990 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.990 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.990 23:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.559 [2024-12-06 23:47:37.946959] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:26.559 [2024-12-06 23:47:37.947028] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:26.559 [2024-12-06 23:47:37.947072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.818 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.076 "name": "raid_bdev1", 00:13:27.076 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:27.076 "strip_size_kb": 0, 00:13:27.076 "state": "online", 00:13:27.076 "raid_level": "raid1", 00:13:27.076 "superblock": false, 00:13:27.076 "num_base_bdevs": 4, 00:13:27.076 "num_base_bdevs_discovered": 3, 00:13:27.076 "num_base_bdevs_operational": 3, 00:13:27.076 "base_bdevs_list": [ 00:13:27.076 { 00:13:27.076 "name": "spare", 00:13:27.076 "uuid": "76c17f80-a6aa-5f62-bd0c-25f0502990d9", 00:13:27.076 "is_configured": true, 00:13:27.076 "data_offset": 0, 00:13:27.076 "data_size": 65536 00:13:27.076 }, 00:13:27.076 { 00:13:27.076 "name": null, 00:13:27.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.076 "is_configured": false, 00:13:27.076 "data_offset": 0, 00:13:27.076 "data_size": 65536 00:13:27.076 }, 00:13:27.076 { 00:13:27.076 "name": "BaseBdev3", 00:13:27.076 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:27.076 "is_configured": true, 00:13:27.076 "data_offset": 0, 00:13:27.076 "data_size": 65536 00:13:27.076 }, 00:13:27.076 { 00:13:27.076 "name": "BaseBdev4", 00:13:27.076 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:27.076 "is_configured": true, 00:13:27.076 "data_offset": 0, 00:13:27.076 "data_size": 65536 00:13:27.076 } 00:13:27.076 ] 00:13:27.076 }' 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.076 23:47:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.076 "name": "raid_bdev1", 00:13:27.076 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:27.076 "strip_size_kb": 0, 00:13:27.076 "state": "online", 00:13:27.076 "raid_level": "raid1", 00:13:27.076 "superblock": false, 00:13:27.076 "num_base_bdevs": 4, 00:13:27.076 "num_base_bdevs_discovered": 3, 00:13:27.076 "num_base_bdevs_operational": 3, 00:13:27.076 "base_bdevs_list": [ 00:13:27.076 { 00:13:27.076 "name": "spare", 
00:13:27.076 "uuid": "76c17f80-a6aa-5f62-bd0c-25f0502990d9", 00:13:27.076 "is_configured": true, 00:13:27.076 "data_offset": 0, 00:13:27.076 "data_size": 65536 00:13:27.076 }, 00:13:27.076 { 00:13:27.076 "name": null, 00:13:27.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.076 "is_configured": false, 00:13:27.076 "data_offset": 0, 00:13:27.076 "data_size": 65536 00:13:27.076 }, 00:13:27.076 { 00:13:27.076 "name": "BaseBdev3", 00:13:27.076 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:27.076 "is_configured": true, 00:13:27.076 "data_offset": 0, 00:13:27.076 "data_size": 65536 00:13:27.076 }, 00:13:27.076 { 00:13:27.076 "name": "BaseBdev4", 00:13:27.076 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:27.076 "is_configured": true, 00:13:27.076 "data_offset": 0, 00:13:27.076 "data_size": 65536 00:13:27.076 } 00:13:27.076 ] 00:13:27.076 }' 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.076 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.335 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.335 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:27.335 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.335 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.335 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.335 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.336 23:47:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.336 "name": "raid_bdev1", 00:13:27.336 "uuid": "471a8d06-cc2c-4824-a514-cab8660759af", 00:13:27.336 "strip_size_kb": 0, 00:13:27.336 "state": "online", 00:13:27.336 "raid_level": "raid1", 00:13:27.336 "superblock": false, 00:13:27.336 "num_base_bdevs": 4, 00:13:27.336 "num_base_bdevs_discovered": 3, 00:13:27.336 "num_base_bdevs_operational": 3, 00:13:27.336 "base_bdevs_list": [ 00:13:27.336 { 00:13:27.336 "name": "spare", 00:13:27.336 "uuid": "76c17f80-a6aa-5f62-bd0c-25f0502990d9", 00:13:27.336 "is_configured": true, 00:13:27.336 "data_offset": 0, 00:13:27.336 "data_size": 65536 00:13:27.336 }, 00:13:27.336 { 00:13:27.336 "name": null, 00:13:27.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.336 "is_configured": false, 00:13:27.336 "data_offset": 0, 00:13:27.336 "data_size": 65536 00:13:27.336 }, 00:13:27.336 { 00:13:27.336 "name": "BaseBdev3", 00:13:27.336 "uuid": "d1f0a725-bda4-5a56-9f62-c45319844a29", 00:13:27.336 "is_configured": true, 
00:13:27.336 "data_offset": 0, 00:13:27.336 "data_size": 65536 00:13:27.336 }, 00:13:27.336 { 00:13:27.336 "name": "BaseBdev4", 00:13:27.336 "uuid": "a12baca4-3a5c-536a-a394-3dbdc997fbd2", 00:13:27.336 "is_configured": true, 00:13:27.336 "data_offset": 0, 00:13:27.336 "data_size": 65536 00:13:27.336 } 00:13:27.336 ] 00:13:27.336 }' 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.336 23:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.595 [2024-12-06 23:47:39.089106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.595 [2024-12-06 23:47:39.089136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.595 [2024-12-06 23:47:39.089209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.595 [2024-12-06 23:47:39.089282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.595 [2024-12-06 23:47:39.089291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:27.595 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:27.853 /dev/nbd0 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:27.853 23:47:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.853 1+0 records in 00:13:27.853 1+0 records out 00:13:27.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483851 s, 8.5 MB/s 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:27.853 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:28.113 /dev/nbd1 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:28.113 
23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.113 1+0 records in 00:13:28.113 1+0 records out 00:13:28.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437634 s, 9.4 MB/s 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:13:28.113 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:28.372 23:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:28.372 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.372 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.372 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:28.372 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:28.372 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.372 23:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.631 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:28.890 
23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:28.890 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77478 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77478 ']' 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77478 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77478 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77478' 00:13:28.891 killing process with pid 77478 00:13:28.891 Received shutdown signal, test time was about 60.000000 seconds 00:13:28.891 00:13:28.891 Latency(us) 
00:13:28.891 [2024-12-06T23:47:40.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.891 [2024-12-06T23:47:40.454Z] =================================================================================================================== 00:13:28.891 [2024-12-06T23:47:40.454Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77478 00:13:28.891 [2024-12-06 23:47:40.260262] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.891 23:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77478 00:13:29.459 [2024-12-06 23:47:40.719890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:30.398 00:13:30.398 real 0m16.678s 00:13:30.398 user 0m18.937s 00:13:30.398 sys 0m3.022s 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.398 ************************************ 00:13:30.398 END TEST raid_rebuild_test 00:13:30.398 ************************************ 00:13:30.398 23:47:41 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:30.398 23:47:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:30.398 23:47:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.398 23:47:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:30.398 ************************************ 00:13:30.398 START TEST raid_rebuild_test_sb 00:13:30.398 ************************************ 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:30.398 23:47:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77919 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77919 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77919 ']' 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:30.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.398 23:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.398 [2024-12-06 23:47:41.947361] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:13:30.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:30.398 Zero copy mechanism will not be used. 00:13:30.398 [2024-12-06 23:47:41.947593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77919 ] 00:13:30.658 [2024-12-06 23:47:42.119309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.918 [2024-12-06 23:47:42.225286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.918 [2024-12-06 23:47:42.400027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.918 [2024-12-06 23:47:42.400076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.488 
BaseBdev1_malloc 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.488 [2024-12-06 23:47:42.804237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:31.488 [2024-12-06 23:47:42.804375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.488 [2024-12-06 23:47:42.804399] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:31.488 [2024-12-06 23:47:42.804411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.488 [2024-12-06 23:47:42.806440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.488 [2024-12-06 23:47:42.806483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:31.488 BaseBdev1 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.488 BaseBdev2_malloc 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.488 [2024-12-06 23:47:42.858128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:31.488 [2024-12-06 23:47:42.858191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.488 [2024-12-06 23:47:42.858212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:31.488 [2024-12-06 23:47:42.858223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.488 [2024-12-06 23:47:42.860225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.488 [2024-12-06 23:47:42.860265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:31.488 BaseBdev2 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.488 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.489 BaseBdev3_malloc 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.489 [2024-12-06 23:47:42.943287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:31.489 [2024-12-06 23:47:42.943344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.489 [2024-12-06 23:47:42.943363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:31.489 [2024-12-06 23:47:42.943373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.489 [2024-12-06 23:47:42.945426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.489 [2024-12-06 23:47:42.945468] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:31.489 BaseBdev3 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.489 BaseBdev4_malloc 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.489 [2024-12-06 23:47:42.991680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:13:31.489 [2024-12-06 23:47:42.991752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.489 [2024-12-06 23:47:42.991775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:31.489 [2024-12-06 23:47:42.991786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.489 [2024-12-06 23:47:42.993809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.489 [2024-12-06 23:47:42.993849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:31.489 BaseBdev4 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.489 23:47:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.489 spare_malloc 00:13:31.489 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.489 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:31.489 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.489 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.749 spare_delay 00:13:31.749 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.749 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:31.749 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.749 23:47:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.749 [2024-12-06 23:47:43.055026] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:31.749 [2024-12-06 23:47:43.055078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.749 [2024-12-06 23:47:43.055096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:31.749 [2024-12-06 23:47:43.055106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.749 [2024-12-06 23:47:43.057088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.749 [2024-12-06 23:47:43.057128] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:31.749 spare 00:13:31.749 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.749 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:31.749 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.749 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.749 [2024-12-06 23:47:43.067052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.749 [2024-12-06 23:47:43.068872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.749 [2024-12-06 23:47:43.068930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.749 [2024-12-06 23:47:43.068976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:31.749 [2024-12-06 23:47:43.069137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:31.749 [2024-12-06 23:47:43.069151] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:31.749 [2024-12-06 23:47:43.069375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:31.749 [2024-12-06 23:47:43.069531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:31.749 [2024-12-06 23:47:43.069540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:31.749 [2024-12-06 23:47:43.069704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.749 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.750 "name": "raid_bdev1", 00:13:31.750 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:31.750 "strip_size_kb": 0, 00:13:31.750 "state": "online", 00:13:31.750 "raid_level": "raid1", 00:13:31.750 "superblock": true, 00:13:31.750 "num_base_bdevs": 4, 00:13:31.750 "num_base_bdevs_discovered": 4, 00:13:31.750 "num_base_bdevs_operational": 4, 00:13:31.750 "base_bdevs_list": [ 00:13:31.750 { 00:13:31.750 "name": "BaseBdev1", 00:13:31.750 "uuid": "9e34e365-8e6f-5219-898b-70556e6e211f", 00:13:31.750 "is_configured": true, 00:13:31.750 "data_offset": 2048, 00:13:31.750 "data_size": 63488 00:13:31.750 }, 00:13:31.750 { 00:13:31.750 "name": "BaseBdev2", 00:13:31.750 "uuid": "5ed64053-d9a4-5c8c-ab4c-2db399f47e2b", 00:13:31.750 "is_configured": true, 00:13:31.750 "data_offset": 2048, 00:13:31.750 "data_size": 63488 00:13:31.750 }, 00:13:31.750 { 00:13:31.750 "name": "BaseBdev3", 00:13:31.750 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:31.750 "is_configured": true, 00:13:31.750 "data_offset": 2048, 00:13:31.750 "data_size": 63488 00:13:31.750 }, 00:13:31.750 { 00:13:31.750 "name": "BaseBdev4", 00:13:31.750 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:31.750 "is_configured": true, 00:13:31.750 "data_offset": 2048, 00:13:31.750 "data_size": 63488 00:13:31.750 } 00:13:31.750 ] 00:13:31.750 }' 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.750 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:32.010 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:32.010 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.010 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.010 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.010 [2024-12-06 23:47:43.558508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.272 
23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.272 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:32.272 [2024-12-06 23:47:43.825797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:32.554 /dev/nbd0 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.554 23:47:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.554 1+0 records in 00:13:32.554 1+0 records out 00:13:32.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447306 s, 9.2 MB/s 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:32.554 23:47:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:37.833 63488+0 records in 00:13:37.833 63488+0 records out 00:13:37.833 32505856 bytes (33 MB, 31 MiB) copied, 5.31817 s, 6.1 MB/s 00:13:37.833 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:37.833 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.833 23:47:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.833 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.833 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:37.833 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.833 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:38.094 [2024-12-06 23:47:49.418736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.094 [2024-12-06 23:47:49.454762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.094 23:47:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.094 "name": "raid_bdev1", 00:13:38.094 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:38.094 "strip_size_kb": 0, 00:13:38.094 "state": "online", 00:13:38.094 "raid_level": "raid1", 00:13:38.094 "superblock": true, 00:13:38.094 "num_base_bdevs": 4, 
00:13:38.094 "num_base_bdevs_discovered": 3, 00:13:38.094 "num_base_bdevs_operational": 3, 00:13:38.094 "base_bdevs_list": [ 00:13:38.094 { 00:13:38.094 "name": null, 00:13:38.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.094 "is_configured": false, 00:13:38.094 "data_offset": 0, 00:13:38.094 "data_size": 63488 00:13:38.094 }, 00:13:38.094 { 00:13:38.094 "name": "BaseBdev2", 00:13:38.094 "uuid": "5ed64053-d9a4-5c8c-ab4c-2db399f47e2b", 00:13:38.094 "is_configured": true, 00:13:38.094 "data_offset": 2048, 00:13:38.094 "data_size": 63488 00:13:38.094 }, 00:13:38.094 { 00:13:38.094 "name": "BaseBdev3", 00:13:38.094 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:38.094 "is_configured": true, 00:13:38.094 "data_offset": 2048, 00:13:38.094 "data_size": 63488 00:13:38.094 }, 00:13:38.094 { 00:13:38.094 "name": "BaseBdev4", 00:13:38.094 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:38.094 "is_configured": true, 00:13:38.094 "data_offset": 2048, 00:13:38.094 "data_size": 63488 00:13:38.094 } 00:13:38.094 ] 00:13:38.094 }' 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.094 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.355 23:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.355 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.355 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.355 [2024-12-06 23:47:49.905943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.615 [2024-12-06 23:47:49.921548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:38.615 23:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.615 23:47:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:13:38.615 [2024-12-06 23:47:49.923374] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.557 "name": "raid_bdev1", 00:13:39.557 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:39.557 "strip_size_kb": 0, 00:13:39.557 "state": "online", 00:13:39.557 "raid_level": "raid1", 00:13:39.557 "superblock": true, 00:13:39.557 "num_base_bdevs": 4, 00:13:39.557 "num_base_bdevs_discovered": 4, 00:13:39.557 "num_base_bdevs_operational": 4, 00:13:39.557 "process": { 00:13:39.557 "type": "rebuild", 00:13:39.557 "target": "spare", 00:13:39.557 "progress": { 00:13:39.557 "blocks": 20480, 00:13:39.557 "percent": 32 00:13:39.557 } 00:13:39.557 }, 00:13:39.557 "base_bdevs_list": [ 00:13:39.557 { 
00:13:39.557 "name": "spare", 00:13:39.557 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:39.557 "is_configured": true, 00:13:39.557 "data_offset": 2048, 00:13:39.557 "data_size": 63488 00:13:39.557 }, 00:13:39.557 { 00:13:39.557 "name": "BaseBdev2", 00:13:39.557 "uuid": "5ed64053-d9a4-5c8c-ab4c-2db399f47e2b", 00:13:39.557 "is_configured": true, 00:13:39.557 "data_offset": 2048, 00:13:39.557 "data_size": 63488 00:13:39.557 }, 00:13:39.557 { 00:13:39.557 "name": "BaseBdev3", 00:13:39.557 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:39.557 "is_configured": true, 00:13:39.557 "data_offset": 2048, 00:13:39.557 "data_size": 63488 00:13:39.557 }, 00:13:39.557 { 00:13:39.557 "name": "BaseBdev4", 00:13:39.557 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:39.557 "is_configured": true, 00:13:39.557 "data_offset": 2048, 00:13:39.557 "data_size": 63488 00:13:39.557 } 00:13:39.557 ] 00:13:39.557 }' 00:13:39.557 23:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.557 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.557 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.557 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.557 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:39.557 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.557 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.557 [2024-12-06 23:47:51.055059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.818 [2024-12-06 23:47:51.127918] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.818 [2024-12-06 
23:47:51.128046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.818 [2024-12-06 23:47:51.128083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.818 [2024-12-06 23:47:51.128097] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.818 "name": "raid_bdev1", 00:13:39.818 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:39.818 "strip_size_kb": 0, 00:13:39.818 "state": "online", 00:13:39.818 "raid_level": "raid1", 00:13:39.818 "superblock": true, 00:13:39.818 "num_base_bdevs": 4, 00:13:39.818 "num_base_bdevs_discovered": 3, 00:13:39.818 "num_base_bdevs_operational": 3, 00:13:39.818 "base_bdevs_list": [ 00:13:39.818 { 00:13:39.818 "name": null, 00:13:39.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.818 "is_configured": false, 00:13:39.818 "data_offset": 0, 00:13:39.818 "data_size": 63488 00:13:39.818 }, 00:13:39.818 { 00:13:39.818 "name": "BaseBdev2", 00:13:39.818 "uuid": "5ed64053-d9a4-5c8c-ab4c-2db399f47e2b", 00:13:39.818 "is_configured": true, 00:13:39.818 "data_offset": 2048, 00:13:39.818 "data_size": 63488 00:13:39.818 }, 00:13:39.818 { 00:13:39.818 "name": "BaseBdev3", 00:13:39.818 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:39.818 "is_configured": true, 00:13:39.818 "data_offset": 2048, 00:13:39.818 "data_size": 63488 00:13:39.818 }, 00:13:39.818 { 00:13:39.818 "name": "BaseBdev4", 00:13:39.818 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:39.818 "is_configured": true, 00:13:39.818 "data_offset": 2048, 00:13:39.818 "data_size": 63488 00:13:39.818 } 00:13:39.818 ] 00:13:39.818 }' 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.818 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.078 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.078 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.078 23:47:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.078 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.078 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.078 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.078 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.078 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.078 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.079 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.079 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.079 "name": "raid_bdev1", 00:13:40.079 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:40.079 "strip_size_kb": 0, 00:13:40.079 "state": "online", 00:13:40.079 "raid_level": "raid1", 00:13:40.079 "superblock": true, 00:13:40.079 "num_base_bdevs": 4, 00:13:40.079 "num_base_bdevs_discovered": 3, 00:13:40.079 "num_base_bdevs_operational": 3, 00:13:40.079 "base_bdevs_list": [ 00:13:40.079 { 00:13:40.079 "name": null, 00:13:40.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.079 "is_configured": false, 00:13:40.079 "data_offset": 0, 00:13:40.079 "data_size": 63488 00:13:40.079 }, 00:13:40.079 { 00:13:40.079 "name": "BaseBdev2", 00:13:40.079 "uuid": "5ed64053-d9a4-5c8c-ab4c-2db399f47e2b", 00:13:40.079 "is_configured": true, 00:13:40.079 "data_offset": 2048, 00:13:40.079 "data_size": 63488 00:13:40.079 }, 00:13:40.079 { 00:13:40.079 "name": "BaseBdev3", 00:13:40.079 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:40.079 "is_configured": true, 00:13:40.079 "data_offset": 2048, 00:13:40.079 "data_size": 63488 
00:13:40.079 }, 00:13:40.079 { 00:13:40.079 "name": "BaseBdev4", 00:13:40.079 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:40.079 "is_configured": true, 00:13:40.079 "data_offset": 2048, 00:13:40.079 "data_size": 63488 00:13:40.079 } 00:13:40.079 ] 00:13:40.079 }' 00:13:40.079 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.339 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.339 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.339 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.339 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.339 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.339 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.339 [2024-12-06 23:47:51.739233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.339 [2024-12-06 23:47:51.753017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:40.339 23:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.339 23:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:40.339 [2024-12-06 23:47:51.755022] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.282 "name": "raid_bdev1", 00:13:41.282 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:41.282 "strip_size_kb": 0, 00:13:41.282 "state": "online", 00:13:41.282 "raid_level": "raid1", 00:13:41.282 "superblock": true, 00:13:41.282 "num_base_bdevs": 4, 00:13:41.282 "num_base_bdevs_discovered": 4, 00:13:41.282 "num_base_bdevs_operational": 4, 00:13:41.282 "process": { 00:13:41.282 "type": "rebuild", 00:13:41.282 "target": "spare", 00:13:41.282 "progress": { 00:13:41.282 "blocks": 20480, 00:13:41.282 "percent": 32 00:13:41.282 } 00:13:41.282 }, 00:13:41.282 "base_bdevs_list": [ 00:13:41.282 { 00:13:41.282 "name": "spare", 00:13:41.282 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:41.282 "is_configured": true, 00:13:41.282 "data_offset": 2048, 00:13:41.282 "data_size": 63488 00:13:41.282 }, 00:13:41.282 { 00:13:41.282 "name": "BaseBdev2", 00:13:41.282 "uuid": "5ed64053-d9a4-5c8c-ab4c-2db399f47e2b", 00:13:41.282 "is_configured": true, 00:13:41.282 "data_offset": 2048, 00:13:41.282 "data_size": 63488 00:13:41.282 }, 00:13:41.282 { 00:13:41.282 "name": "BaseBdev3", 00:13:41.282 "uuid": 
"3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:41.282 "is_configured": true, 00:13:41.282 "data_offset": 2048, 00:13:41.282 "data_size": 63488 00:13:41.282 }, 00:13:41.282 { 00:13:41.282 "name": "BaseBdev4", 00:13:41.282 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:41.282 "is_configured": true, 00:13:41.282 "data_offset": 2048, 00:13:41.282 "data_size": 63488 00:13:41.282 } 00:13:41.282 ] 00:13:41.282 }' 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.282 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:41.543 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.543 23:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.543 [2024-12-06 23:47:52.898083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:41.543 [2024-12-06 23:47:53.059372] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.543 23:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.804 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.804 "name": "raid_bdev1", 00:13:41.804 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:41.804 "strip_size_kb": 0, 00:13:41.804 "state": "online", 00:13:41.804 "raid_level": "raid1", 00:13:41.804 "superblock": true, 00:13:41.804 "num_base_bdevs": 4, 00:13:41.804 "num_base_bdevs_discovered": 3, 00:13:41.804 "num_base_bdevs_operational": 3, 00:13:41.804 
"process": { 00:13:41.804 "type": "rebuild", 00:13:41.804 "target": "spare", 00:13:41.804 "progress": { 00:13:41.804 "blocks": 24576, 00:13:41.804 "percent": 38 00:13:41.804 } 00:13:41.804 }, 00:13:41.804 "base_bdevs_list": [ 00:13:41.804 { 00:13:41.804 "name": "spare", 00:13:41.804 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:41.804 "is_configured": true, 00:13:41.804 "data_offset": 2048, 00:13:41.804 "data_size": 63488 00:13:41.804 }, 00:13:41.804 { 00:13:41.804 "name": null, 00:13:41.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.804 "is_configured": false, 00:13:41.804 "data_offset": 0, 00:13:41.804 "data_size": 63488 00:13:41.804 }, 00:13:41.804 { 00:13:41.804 "name": "BaseBdev3", 00:13:41.804 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:41.804 "is_configured": true, 00:13:41.804 "data_offset": 2048, 00:13:41.804 "data_size": 63488 00:13:41.804 }, 00:13:41.804 { 00:13:41.804 "name": "BaseBdev4", 00:13:41.804 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:41.805 "is_configured": true, 00:13:41.805 "data_offset": 2048, 00:13:41.805 "data_size": 63488 00:13:41.805 } 00:13:41.805 ] 00:13:41.805 }' 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=462 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.805 23:47:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.805 "name": "raid_bdev1", 00:13:41.805 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:41.805 "strip_size_kb": 0, 00:13:41.805 "state": "online", 00:13:41.805 "raid_level": "raid1", 00:13:41.805 "superblock": true, 00:13:41.805 "num_base_bdevs": 4, 00:13:41.805 "num_base_bdevs_discovered": 3, 00:13:41.805 "num_base_bdevs_operational": 3, 00:13:41.805 "process": { 00:13:41.805 "type": "rebuild", 00:13:41.805 "target": "spare", 00:13:41.805 "progress": { 00:13:41.805 "blocks": 26624, 00:13:41.805 "percent": 41 00:13:41.805 } 00:13:41.805 }, 00:13:41.805 "base_bdevs_list": [ 00:13:41.805 { 00:13:41.805 "name": "spare", 00:13:41.805 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:41.805 "is_configured": true, 00:13:41.805 "data_offset": 2048, 00:13:41.805 "data_size": 63488 00:13:41.805 }, 00:13:41.805 { 00:13:41.805 "name": null, 00:13:41.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.805 
"is_configured": false, 00:13:41.805 "data_offset": 0, 00:13:41.805 "data_size": 63488 00:13:41.805 }, 00:13:41.805 { 00:13:41.805 "name": "BaseBdev3", 00:13:41.805 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:41.805 "is_configured": true, 00:13:41.805 "data_offset": 2048, 00:13:41.805 "data_size": 63488 00:13:41.805 }, 00:13:41.805 { 00:13:41.805 "name": "BaseBdev4", 00:13:41.805 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:41.805 "is_configured": true, 00:13:41.805 "data_offset": 2048, 00:13:41.805 "data_size": 63488 00:13:41.805 } 00:13:41.805 ] 00:13:41.805 }' 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.805 23:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.187 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.187 "name": "raid_bdev1", 00:13:43.187 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:43.187 "strip_size_kb": 0, 00:13:43.187 "state": "online", 00:13:43.187 "raid_level": "raid1", 00:13:43.187 "superblock": true, 00:13:43.187 "num_base_bdevs": 4, 00:13:43.187 "num_base_bdevs_discovered": 3, 00:13:43.187 "num_base_bdevs_operational": 3, 00:13:43.187 "process": { 00:13:43.187 "type": "rebuild", 00:13:43.187 "target": "spare", 00:13:43.187 "progress": { 00:13:43.187 "blocks": 49152, 00:13:43.187 "percent": 77 00:13:43.187 } 00:13:43.187 }, 00:13:43.187 "base_bdevs_list": [ 00:13:43.187 { 00:13:43.187 "name": "spare", 00:13:43.187 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:43.187 "is_configured": true, 00:13:43.187 "data_offset": 2048, 00:13:43.187 "data_size": 63488 00:13:43.187 }, 00:13:43.187 { 00:13:43.187 "name": null, 00:13:43.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.188 "is_configured": false, 00:13:43.188 "data_offset": 0, 00:13:43.188 "data_size": 63488 00:13:43.188 }, 00:13:43.188 { 00:13:43.188 "name": "BaseBdev3", 00:13:43.188 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:43.188 "is_configured": true, 00:13:43.188 "data_offset": 2048, 00:13:43.188 "data_size": 63488 00:13:43.188 }, 00:13:43.188 { 00:13:43.188 "name": "BaseBdev4", 00:13:43.188 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:43.188 "is_configured": true, 00:13:43.188 "data_offset": 2048, 00:13:43.188 "data_size": 63488 00:13:43.188 } 00:13:43.188 ] 00:13:43.188 }' 00:13:43.188 
23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.188 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.188 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.188 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.188 23:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.448 [2024-12-06 23:47:54.966477] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:43.448 [2024-12-06 23:47:54.966541] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:43.448 [2024-12-06 23:47:54.966667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.026 "name": "raid_bdev1", 00:13:44.026 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:44.026 "strip_size_kb": 0, 00:13:44.026 "state": "online", 00:13:44.026 "raid_level": "raid1", 00:13:44.026 "superblock": true, 00:13:44.026 "num_base_bdevs": 4, 00:13:44.026 "num_base_bdevs_discovered": 3, 00:13:44.026 "num_base_bdevs_operational": 3, 00:13:44.026 "base_bdevs_list": [ 00:13:44.026 { 00:13:44.026 "name": "spare", 00:13:44.026 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:44.026 "is_configured": true, 00:13:44.026 "data_offset": 2048, 00:13:44.026 "data_size": 63488 00:13:44.026 }, 00:13:44.026 { 00:13:44.026 "name": null, 00:13:44.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.026 "is_configured": false, 00:13:44.026 "data_offset": 0, 00:13:44.026 "data_size": 63488 00:13:44.026 }, 00:13:44.026 { 00:13:44.026 "name": "BaseBdev3", 00:13:44.026 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:44.026 "is_configured": true, 00:13:44.026 "data_offset": 2048, 00:13:44.026 "data_size": 63488 00:13:44.026 }, 00:13:44.026 { 00:13:44.026 "name": "BaseBdev4", 00:13:44.026 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:44.026 "is_configured": true, 00:13:44.026 "data_offset": 2048, 00:13:44.026 "data_size": 63488 00:13:44.026 } 00:13:44.026 ] 00:13:44.026 }' 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:44.026 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none 
== \s\p\a\r\e ]] 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.285 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.285 "name": "raid_bdev1", 00:13:44.285 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:44.285 "strip_size_kb": 0, 00:13:44.285 "state": "online", 00:13:44.285 "raid_level": "raid1", 00:13:44.285 "superblock": true, 00:13:44.285 "num_base_bdevs": 4, 00:13:44.285 "num_base_bdevs_discovered": 3, 00:13:44.285 "num_base_bdevs_operational": 3, 00:13:44.285 "base_bdevs_list": [ 00:13:44.285 { 00:13:44.285 "name": "spare", 00:13:44.286 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:44.286 "is_configured": true, 00:13:44.286 "data_offset": 2048, 00:13:44.286 "data_size": 63488 00:13:44.286 }, 00:13:44.286 { 00:13:44.286 "name": null, 00:13:44.286 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:44.286 "is_configured": false, 00:13:44.286 "data_offset": 0, 00:13:44.286 "data_size": 63488 00:13:44.286 }, 00:13:44.286 { 00:13:44.286 "name": "BaseBdev3", 00:13:44.286 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:44.286 "is_configured": true, 00:13:44.286 "data_offset": 2048, 00:13:44.286 "data_size": 63488 00:13:44.286 }, 00:13:44.286 { 00:13:44.286 "name": "BaseBdev4", 00:13:44.286 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:44.286 "is_configured": true, 00:13:44.286 "data_offset": 2048, 00:13:44.286 "data_size": 63488 00:13:44.286 } 00:13:44.286 ] 00:13:44.286 }' 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.286 
23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.286 "name": "raid_bdev1", 00:13:44.286 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:44.286 "strip_size_kb": 0, 00:13:44.286 "state": "online", 00:13:44.286 "raid_level": "raid1", 00:13:44.286 "superblock": true, 00:13:44.286 "num_base_bdevs": 4, 00:13:44.286 "num_base_bdevs_discovered": 3, 00:13:44.286 "num_base_bdevs_operational": 3, 00:13:44.286 "base_bdevs_list": [ 00:13:44.286 { 00:13:44.286 "name": "spare", 00:13:44.286 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:44.286 "is_configured": true, 00:13:44.286 "data_offset": 2048, 00:13:44.286 "data_size": 63488 00:13:44.286 }, 00:13:44.286 { 00:13:44.286 "name": null, 00:13:44.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.286 "is_configured": false, 00:13:44.286 "data_offset": 0, 00:13:44.286 "data_size": 63488 00:13:44.286 }, 00:13:44.286 { 00:13:44.286 "name": "BaseBdev3", 00:13:44.286 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:44.286 "is_configured": true, 00:13:44.286 "data_offset": 2048, 00:13:44.286 "data_size": 63488 00:13:44.286 }, 00:13:44.286 { 00:13:44.286 "name": "BaseBdev4", 00:13:44.286 "uuid": 
"bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:44.286 "is_configured": true, 00:13:44.286 "data_offset": 2048, 00:13:44.286 "data_size": 63488 00:13:44.286 } 00:13:44.286 ] 00:13:44.286 }' 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.286 23:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.856 [2024-12-06 23:47:56.205216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.856 [2024-12-06 23:47:56.205247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.856 [2024-12-06 23:47:56.205315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.856 [2024-12-06 23:47:56.205385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.856 [2024-12-06 23:47:56.205395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.856 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:45.117 /dev/nbd0 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # 
(( i = 1 )) 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.117 1+0 records in 00:13:45.117 1+0 records out 00:13:45.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052012 s, 7.9 MB/s 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:45.117 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:45.377 /dev/nbd1 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:45.377 23:47:56 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.377 1+0 records in 00:13:45.377 1+0 records out 00:13:45.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482369 s, 8.5 MB/s 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.377 23:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.635 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.894 [2024-12-06 23:47:57.357863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.894 [2024-12-06 23:47:57.357917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:45.894 [2024-12-06 23:47:57.357954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:45.894 [2024-12-06 23:47:57.357964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.894 [2024-12-06 23:47:57.360126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.894 [2024-12-06 23:47:57.360222] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.894 [2024-12-06 23:47:57.360321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.894 [2024-12-06 23:47:57.360373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.894 [2024-12-06 23:47:57.360506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.894 [2024-12-06 23:47:57.360589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:45.894 spare 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.894 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.153 [2024-12-06 23:47:57.460476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:46.153 [2024-12-06 23:47:57.460500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.153 [2024-12-06 23:47:57.460762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:46.153 [2024-12-06 23:47:57.460944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:46.153 [2024-12-06 23:47:57.460962] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:46.153 [2024-12-06 23:47:57.461119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.153 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.153 
23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.153 "name": "raid_bdev1", 00:13:46.153 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:46.154 "strip_size_kb": 0, 00:13:46.154 "state": "online", 00:13:46.154 "raid_level": "raid1", 00:13:46.154 "superblock": true, 00:13:46.154 "num_base_bdevs": 4, 00:13:46.154 "num_base_bdevs_discovered": 3, 00:13:46.154 "num_base_bdevs_operational": 3, 00:13:46.154 "base_bdevs_list": [ 00:13:46.154 { 00:13:46.154 "name": "spare", 00:13:46.154 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:46.154 "is_configured": true, 00:13:46.154 "data_offset": 2048, 00:13:46.154 "data_size": 63488 00:13:46.154 }, 00:13:46.154 { 00:13:46.154 "name": null, 00:13:46.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.154 "is_configured": false, 00:13:46.154 "data_offset": 2048, 00:13:46.154 "data_size": 63488 00:13:46.154 }, 00:13:46.154 { 00:13:46.154 "name": "BaseBdev3", 00:13:46.154 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:46.154 "is_configured": true, 00:13:46.154 "data_offset": 2048, 00:13:46.154 "data_size": 63488 00:13:46.154 }, 00:13:46.154 { 00:13:46.154 "name": "BaseBdev4", 00:13:46.154 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:46.154 "is_configured": true, 00:13:46.154 "data_offset": 2048, 00:13:46.154 "data_size": 63488 00:13:46.154 } 00:13:46.154 ] 00:13:46.154 }' 00:13:46.154 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.154 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.412 23:47:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.412 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.412 "name": "raid_bdev1", 00:13:46.412 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:46.412 "strip_size_kb": 0, 00:13:46.412 "state": "online", 00:13:46.412 "raid_level": "raid1", 00:13:46.412 "superblock": true, 00:13:46.412 "num_base_bdevs": 4, 00:13:46.412 "num_base_bdevs_discovered": 3, 00:13:46.412 "num_base_bdevs_operational": 3, 00:13:46.412 "base_bdevs_list": [ 00:13:46.412 { 00:13:46.412 "name": "spare", 00:13:46.412 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:46.412 "is_configured": true, 00:13:46.412 "data_offset": 2048, 00:13:46.412 "data_size": 63488 00:13:46.412 }, 00:13:46.412 { 00:13:46.412 "name": null, 00:13:46.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.413 "is_configured": false, 00:13:46.413 "data_offset": 2048, 00:13:46.413 "data_size": 63488 00:13:46.413 }, 00:13:46.413 { 00:13:46.413 "name": "BaseBdev3", 00:13:46.413 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:46.413 "is_configured": true, 00:13:46.413 "data_offset": 2048, 00:13:46.413 "data_size": 63488 00:13:46.413 }, 00:13:46.413 { 00:13:46.413 "name": "BaseBdev4", 00:13:46.413 "uuid": 
"bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:46.413 "is_configured": true, 00:13:46.413 "data_offset": 2048, 00:13:46.413 "data_size": 63488 00:13:46.413 } 00:13:46.413 ] 00:13:46.413 }' 00:13:46.413 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.413 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.413 23:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.670 [2024-12-06 23:47:58.072683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.670 23:47:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.670 "name": "raid_bdev1", 00:13:46.670 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:46.670 "strip_size_kb": 0, 00:13:46.670 "state": "online", 00:13:46.670 "raid_level": "raid1", 00:13:46.670 "superblock": true, 00:13:46.670 "num_base_bdevs": 4, 00:13:46.670 "num_base_bdevs_discovered": 2, 00:13:46.670 "num_base_bdevs_operational": 2, 00:13:46.670 "base_bdevs_list": [ 00:13:46.670 { 
00:13:46.670 "name": null, 00:13:46.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.670 "is_configured": false, 00:13:46.670 "data_offset": 0, 00:13:46.670 "data_size": 63488 00:13:46.670 }, 00:13:46.670 { 00:13:46.670 "name": null, 00:13:46.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.670 "is_configured": false, 00:13:46.670 "data_offset": 2048, 00:13:46.670 "data_size": 63488 00:13:46.670 }, 00:13:46.670 { 00:13:46.670 "name": "BaseBdev3", 00:13:46.670 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:46.670 "is_configured": true, 00:13:46.670 "data_offset": 2048, 00:13:46.670 "data_size": 63488 00:13:46.670 }, 00:13:46.670 { 00:13:46.670 "name": "BaseBdev4", 00:13:46.670 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:46.670 "is_configured": true, 00:13:46.670 "data_offset": 2048, 00:13:46.670 "data_size": 63488 00:13:46.670 } 00:13:46.670 ] 00:13:46.670 }' 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.670 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.237 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.237 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.237 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.237 [2024-12-06 23:47:58.559829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.237 [2024-12-06 23:47:58.559992] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:47.237 [2024-12-06 23:47:58.560006] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:47.237 [2024-12-06 23:47:58.560043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.237 [2024-12-06 23:47:58.573583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:47.237 23:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.237 23:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:47.237 [2024-12-06 23:47:58.575371] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.172 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.172 "name": "raid_bdev1", 00:13:48.172 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:48.172 "strip_size_kb": 0, 00:13:48.172 "state": "online", 00:13:48.172 "raid_level": "raid1", 
00:13:48.172 "superblock": true, 00:13:48.172 "num_base_bdevs": 4, 00:13:48.172 "num_base_bdevs_discovered": 3, 00:13:48.172 "num_base_bdevs_operational": 3, 00:13:48.172 "process": { 00:13:48.172 "type": "rebuild", 00:13:48.172 "target": "spare", 00:13:48.172 "progress": { 00:13:48.172 "blocks": 20480, 00:13:48.172 "percent": 32 00:13:48.172 } 00:13:48.172 }, 00:13:48.172 "base_bdevs_list": [ 00:13:48.172 { 00:13:48.172 "name": "spare", 00:13:48.172 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:48.172 "is_configured": true, 00:13:48.172 "data_offset": 2048, 00:13:48.172 "data_size": 63488 00:13:48.172 }, 00:13:48.172 { 00:13:48.172 "name": null, 00:13:48.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.172 "is_configured": false, 00:13:48.172 "data_offset": 2048, 00:13:48.172 "data_size": 63488 00:13:48.172 }, 00:13:48.173 { 00:13:48.173 "name": "BaseBdev3", 00:13:48.173 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:48.173 "is_configured": true, 00:13:48.173 "data_offset": 2048, 00:13:48.173 "data_size": 63488 00:13:48.173 }, 00:13:48.173 { 00:13:48.173 "name": "BaseBdev4", 00:13:48.173 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:48.173 "is_configured": true, 00:13:48.173 "data_offset": 2048, 00:13:48.173 "data_size": 63488 00:13:48.173 } 00:13:48.173 ] 00:13:48.173 }' 00:13:48.173 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.173 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.173 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.434 [2024-12-06 23:47:59.743037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.434 [2024-12-06 23:47:59.779965] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:48.434 [2024-12-06 23:47:59.780019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.434 [2024-12-06 23:47:59.780053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.434 [2024-12-06 23:47:59.780059] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.434 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.434 "name": "raid_bdev1", 00:13:48.434 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:48.434 "strip_size_kb": 0, 00:13:48.434 "state": "online", 00:13:48.434 "raid_level": "raid1", 00:13:48.434 "superblock": true, 00:13:48.434 "num_base_bdevs": 4, 00:13:48.434 "num_base_bdevs_discovered": 2, 00:13:48.434 "num_base_bdevs_operational": 2, 00:13:48.434 "base_bdevs_list": [ 00:13:48.434 { 00:13:48.434 "name": null, 00:13:48.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.434 "is_configured": false, 00:13:48.434 "data_offset": 0, 00:13:48.434 "data_size": 63488 00:13:48.434 }, 00:13:48.434 { 00:13:48.434 "name": null, 00:13:48.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.434 "is_configured": false, 00:13:48.434 "data_offset": 2048, 00:13:48.434 "data_size": 63488 00:13:48.434 }, 00:13:48.434 { 00:13:48.434 "name": "BaseBdev3", 00:13:48.434 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:48.435 "is_configured": true, 00:13:48.435 "data_offset": 2048, 00:13:48.435 "data_size": 63488 00:13:48.435 }, 00:13:48.435 { 00:13:48.435 "name": "BaseBdev4", 00:13:48.435 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:48.435 "is_configured": true, 00:13:48.435 "data_offset": 2048, 00:13:48.435 "data_size": 63488 00:13:48.435 } 00:13:48.435 ] 00:13:48.435 }' 00:13:48.435 23:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:48.435 23:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.694 23:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.694 23:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.694 23:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.694 [2024-12-06 23:48:00.247729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.694 [2024-12-06 23:48:00.247858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.694 [2024-12-06 23:48:00.247900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:48.694 [2024-12-06 23:48:00.247926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.694 [2024-12-06 23:48:00.248393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.694 [2024-12-06 23:48:00.248453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.694 [2024-12-06 23:48:00.248555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:48.694 [2024-12-06 23:48:00.248594] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:48.694 [2024-12-06 23:48:00.248636] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:48.694 [2024-12-06 23:48:00.248724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.954 [2024-12-06 23:48:00.261696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:48.954 spare 00:13:48.954 23:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.954 [2024-12-06 23:48:00.263555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.954 23:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.891 "name": "raid_bdev1", 00:13:49.891 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:49.891 "strip_size_kb": 0, 00:13:49.891 "state": "online", 00:13:49.891 
"raid_level": "raid1", 00:13:49.891 "superblock": true, 00:13:49.891 "num_base_bdevs": 4, 00:13:49.891 "num_base_bdevs_discovered": 3, 00:13:49.891 "num_base_bdevs_operational": 3, 00:13:49.891 "process": { 00:13:49.891 "type": "rebuild", 00:13:49.891 "target": "spare", 00:13:49.891 "progress": { 00:13:49.891 "blocks": 20480, 00:13:49.891 "percent": 32 00:13:49.891 } 00:13:49.891 }, 00:13:49.891 "base_bdevs_list": [ 00:13:49.891 { 00:13:49.891 "name": "spare", 00:13:49.891 "uuid": "7c4b4030-4869-5fb7-98c2-74512a5f3350", 00:13:49.891 "is_configured": true, 00:13:49.891 "data_offset": 2048, 00:13:49.891 "data_size": 63488 00:13:49.891 }, 00:13:49.891 { 00:13:49.891 "name": null, 00:13:49.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.891 "is_configured": false, 00:13:49.891 "data_offset": 2048, 00:13:49.891 "data_size": 63488 00:13:49.891 }, 00:13:49.891 { 00:13:49.891 "name": "BaseBdev3", 00:13:49.891 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:49.891 "is_configured": true, 00:13:49.891 "data_offset": 2048, 00:13:49.891 "data_size": 63488 00:13:49.891 }, 00:13:49.891 { 00:13:49.891 "name": "BaseBdev4", 00:13:49.891 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:49.891 "is_configured": true, 00:13:49.891 "data_offset": 2048, 00:13:49.891 "data_size": 63488 00:13:49.891 } 00:13:49.891 ] 00:13:49.891 }' 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.891 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.891 [2024-12-06 23:48:01.427759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.151 [2024-12-06 23:48:01.468118] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.151 [2024-12-06 23:48:01.468180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.151 [2024-12-06 23:48:01.468195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.151 [2024-12-06 23:48:01.468203] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.151 
23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.151 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.151 "name": "raid_bdev1", 00:13:50.151 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:50.152 "strip_size_kb": 0, 00:13:50.152 "state": "online", 00:13:50.152 "raid_level": "raid1", 00:13:50.152 "superblock": true, 00:13:50.152 "num_base_bdevs": 4, 00:13:50.152 "num_base_bdevs_discovered": 2, 00:13:50.152 "num_base_bdevs_operational": 2, 00:13:50.152 "base_bdevs_list": [ 00:13:50.152 { 00:13:50.152 "name": null, 00:13:50.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.152 "is_configured": false, 00:13:50.152 "data_offset": 0, 00:13:50.152 "data_size": 63488 00:13:50.152 }, 00:13:50.152 { 00:13:50.152 "name": null, 00:13:50.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.152 "is_configured": false, 00:13:50.152 "data_offset": 2048, 00:13:50.152 "data_size": 63488 00:13:50.152 }, 00:13:50.152 { 00:13:50.152 "name": "BaseBdev3", 00:13:50.152 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:50.152 "is_configured": true, 00:13:50.152 "data_offset": 2048, 00:13:50.152 "data_size": 63488 00:13:50.152 }, 00:13:50.152 { 00:13:50.152 "name": "BaseBdev4", 00:13:50.152 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:50.152 "is_configured": true, 00:13:50.152 "data_offset": 2048, 00:13:50.152 "data_size": 63488 00:13:50.152 } 00:13:50.152 ] 00:13:50.152 }' 00:13:50.152 23:48:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.152 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.720 23:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.720 "name": "raid_bdev1", 00:13:50.720 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:50.720 "strip_size_kb": 0, 00:13:50.720 "state": "online", 00:13:50.720 "raid_level": "raid1", 00:13:50.720 "superblock": true, 00:13:50.720 "num_base_bdevs": 4, 00:13:50.720 "num_base_bdevs_discovered": 2, 00:13:50.720 "num_base_bdevs_operational": 2, 00:13:50.720 "base_bdevs_list": [ 00:13:50.720 { 00:13:50.720 "name": null, 00:13:50.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.720 "is_configured": false, 00:13:50.720 "data_offset": 0, 00:13:50.720 "data_size": 63488 00:13:50.720 }, 00:13:50.720 
{ 00:13:50.720 "name": null, 00:13:50.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.720 "is_configured": false, 00:13:50.720 "data_offset": 2048, 00:13:50.720 "data_size": 63488 00:13:50.720 }, 00:13:50.720 { 00:13:50.720 "name": "BaseBdev3", 00:13:50.720 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:50.720 "is_configured": true, 00:13:50.720 "data_offset": 2048, 00:13:50.720 "data_size": 63488 00:13:50.720 }, 00:13:50.720 { 00:13:50.720 "name": "BaseBdev4", 00:13:50.720 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:50.720 "is_configured": true, 00:13:50.720 "data_offset": 2048, 00:13:50.720 "data_size": 63488 00:13:50.720 } 00:13:50.720 ] 00:13:50.720 }' 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.720 23:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.721 [2024-12-06 23:48:02.127880] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:50.721 [2024-12-06 23:48:02.128001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.721 [2024-12-06 23:48:02.128025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:50.721 [2024-12-06 23:48:02.128036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.721 [2024-12-06 23:48:02.128482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.721 [2024-12-06 23:48:02.128503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.721 [2024-12-06 23:48:02.128570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:50.721 [2024-12-06 23:48:02.128585] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:50.721 [2024-12-06 23:48:02.128594] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:50.721 [2024-12-06 23:48:02.128617] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:50.721 BaseBdev1 00:13:50.721 23:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.721 23:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.661 23:48:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.661 "name": "raid_bdev1", 00:13:51.661 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:51.661 "strip_size_kb": 0, 00:13:51.661 "state": "online", 00:13:51.661 "raid_level": "raid1", 00:13:51.661 "superblock": true, 00:13:51.661 "num_base_bdevs": 4, 00:13:51.661 "num_base_bdevs_discovered": 2, 00:13:51.661 "num_base_bdevs_operational": 2, 00:13:51.661 "base_bdevs_list": [ 00:13:51.661 { 00:13:51.661 "name": null, 00:13:51.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.661 "is_configured": false, 00:13:51.661 "data_offset": 0, 00:13:51.661 "data_size": 63488 00:13:51.661 }, 00:13:51.661 { 00:13:51.661 "name": null, 00:13:51.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.661 
"is_configured": false, 00:13:51.661 "data_offset": 2048, 00:13:51.661 "data_size": 63488 00:13:51.661 }, 00:13:51.661 { 00:13:51.661 "name": "BaseBdev3", 00:13:51.661 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:51.661 "is_configured": true, 00:13:51.661 "data_offset": 2048, 00:13:51.661 "data_size": 63488 00:13:51.661 }, 00:13:51.661 { 00:13:51.661 "name": "BaseBdev4", 00:13:51.661 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:51.661 "is_configured": true, 00:13:51.661 "data_offset": 2048, 00:13:51.661 "data_size": 63488 00:13:51.661 } 00:13:51.661 ] 00:13:51.661 }' 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.661 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:52.232 "name": "raid_bdev1", 00:13:52.232 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:52.232 "strip_size_kb": 0, 00:13:52.232 "state": "online", 00:13:52.232 "raid_level": "raid1", 00:13:52.232 "superblock": true, 00:13:52.232 "num_base_bdevs": 4, 00:13:52.232 "num_base_bdevs_discovered": 2, 00:13:52.232 "num_base_bdevs_operational": 2, 00:13:52.232 "base_bdevs_list": [ 00:13:52.232 { 00:13:52.232 "name": null, 00:13:52.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.232 "is_configured": false, 00:13:52.232 "data_offset": 0, 00:13:52.232 "data_size": 63488 00:13:52.232 }, 00:13:52.232 { 00:13:52.232 "name": null, 00:13:52.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.232 "is_configured": false, 00:13:52.232 "data_offset": 2048, 00:13:52.232 "data_size": 63488 00:13:52.232 }, 00:13:52.232 { 00:13:52.232 "name": "BaseBdev3", 00:13:52.232 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:52.232 "is_configured": true, 00:13:52.232 "data_offset": 2048, 00:13:52.232 "data_size": 63488 00:13:52.232 }, 00:13:52.232 { 00:13:52.232 "name": "BaseBdev4", 00:13:52.232 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:52.232 "is_configured": true, 00:13:52.232 "data_offset": 2048, 00:13:52.232 "data_size": 63488 00:13:52.232 } 00:13:52.232 ] 00:13:52.232 }' 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.232 [2024-12-06 23:48:03.753154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.232 [2024-12-06 23:48:03.753333] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:52.232 [2024-12-06 23:48:03.753348] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:52.232 request: 00:13:52.232 { 00:13:52.232 "base_bdev": "BaseBdev1", 00:13:52.232 "raid_bdev": "raid_bdev1", 00:13:52.232 "method": "bdev_raid_add_base_bdev", 00:13:52.232 "req_id": 1 00:13:52.232 } 00:13:52.232 Got JSON-RPC error response 00:13:52.232 response: 00:13:52.232 { 00:13:52.232 "code": -22, 00:13:52.232 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:52.232 } 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.232 23:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.615 "name": "raid_bdev1", 00:13:53.615 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:53.615 "strip_size_kb": 0, 00:13:53.615 "state": "online", 00:13:53.615 "raid_level": "raid1", 00:13:53.615 "superblock": true, 00:13:53.615 "num_base_bdevs": 4, 00:13:53.615 "num_base_bdevs_discovered": 2, 00:13:53.615 "num_base_bdevs_operational": 2, 00:13:53.615 "base_bdevs_list": [ 00:13:53.615 { 00:13:53.615 "name": null, 00:13:53.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.615 "is_configured": false, 00:13:53.615 "data_offset": 0, 00:13:53.615 "data_size": 63488 00:13:53.615 }, 00:13:53.615 { 00:13:53.615 "name": null, 00:13:53.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.615 "is_configured": false, 00:13:53.615 "data_offset": 2048, 00:13:53.615 "data_size": 63488 00:13:53.615 }, 00:13:53.615 { 00:13:53.615 "name": "BaseBdev3", 00:13:53.615 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:53.615 "is_configured": true, 00:13:53.615 "data_offset": 2048, 00:13:53.615 "data_size": 63488 00:13:53.615 }, 00:13:53.615 { 00:13:53.615 "name": "BaseBdev4", 00:13:53.615 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:53.615 "is_configured": true, 00:13:53.615 "data_offset": 2048, 00:13:53.615 "data_size": 63488 00:13:53.615 } 00:13:53.615 ] 00:13:53.615 }' 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.615 23:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.875 23:48:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.875 "name": "raid_bdev1", 00:13:53.875 "uuid": "fdff71e5-64e2-4c9a-8dcc-c42639679a03", 00:13:53.875 "strip_size_kb": 0, 00:13:53.875 "state": "online", 00:13:53.875 "raid_level": "raid1", 00:13:53.875 "superblock": true, 00:13:53.875 "num_base_bdevs": 4, 00:13:53.875 "num_base_bdevs_discovered": 2, 00:13:53.875 "num_base_bdevs_operational": 2, 00:13:53.875 "base_bdevs_list": [ 00:13:53.875 { 00:13:53.875 "name": null, 00:13:53.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.875 "is_configured": false, 00:13:53.875 "data_offset": 0, 00:13:53.875 "data_size": 63488 00:13:53.875 }, 00:13:53.875 { 00:13:53.875 "name": null, 00:13:53.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.875 "is_configured": false, 00:13:53.875 "data_offset": 2048, 00:13:53.875 "data_size": 63488 00:13:53.875 }, 00:13:53.875 { 00:13:53.875 "name": "BaseBdev3", 00:13:53.875 "uuid": "3d942bf2-9912-53e4-a90e-96cf52eb70fc", 00:13:53.875 "is_configured": true, 00:13:53.875 "data_offset": 2048, 00:13:53.875 "data_size": 63488 00:13:53.875 }, 
00:13:53.875 { 00:13:53.875 "name": "BaseBdev4", 00:13:53.875 "uuid": "bbab3fc2-d490-5385-9529-34a22b9c90ed", 00:13:53.875 "is_configured": true, 00:13:53.875 "data_offset": 2048, 00:13:53.875 "data_size": 63488 00:13:53.875 } 00:13:53.875 ] 00:13:53.875 }' 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77919 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77919 ']' 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77919 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77919 00:13:53.875 killing process with pid 77919 00:13:53.875 Received shutdown signal, test time was about 60.000000 seconds 00:13:53.875 00:13:53.875 Latency(us) 00:13:53.875 [2024-12-06T23:48:05.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.875 [2024-12-06T23:48:05.438Z] =================================================================================================================== 00:13:53.875 [2024-12-06T23:48:05.438Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77919' 00:13:53.875 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77919 00:13:53.875 [2024-12-06 23:48:05.374851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.875 [2024-12-06 23:48:05.374956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.875 [2024-12-06 23:48:05.375015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.875 [2024-12-06 23:48:05.375025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:54.444 23:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77919 [2024-12-06 23:48:05.832898] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.383 23:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:13:55.644 ************************************ 00:13:55.644 START TEST raid_rebuild_test_io 00:13:55.644 ************************************ 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78667 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78667 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78667 ']' 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.644 23:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.644 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:55.644 Zero copy mechanism will not be used. 00:13:55.644 [2024-12-06 23:48:07.077370] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:13:55.644 [2024-12-06 23:48:07.077543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78667 ] 00:13:55.905 [2024-12-06 23:48:07.255271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.905 [2024-12-06 23:48:07.360008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.165 [2024-12-06 23:48:07.540879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.165 [2024-12-06 23:48:07.540966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.425 BaseBdev1_malloc 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.425 [2024-12-06 23:48:07.917401] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:56.425 [2024-12-06 23:48:07.917472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.425 [2024-12-06 23:48:07.917493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:56.425 [2024-12-06 23:48:07.917503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.425 [2024-12-06 23:48:07.919574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.425 [2024-12-06 23:48:07.919613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.425 BaseBdev1 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:56.425 BaseBdev2_malloc 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.425 [2024-12-06 23:48:07.967451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:56.425 [2024-12-06 23:48:07.967517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.425 [2024-12-06 23:48:07.967556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:56.425 [2024-12-06 23:48:07.967567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.425 [2024-12-06 23:48:07.969540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.425 [2024-12-06 23:48:07.969578] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:56.425 BaseBdev2 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.425 23:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.686 BaseBdev3_malloc 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.686 [2024-12-06 23:48:08.053713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:56.686 [2024-12-06 23:48:08.053863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.686 [2024-12-06 23:48:08.053888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:56.686 [2024-12-06 23:48:08.053899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.686 [2024-12-06 23:48:08.055884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.686 [2024-12-06 23:48:08.055926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:56.686 BaseBdev3 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.686 BaseBdev4_malloc 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:56.686 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.686 [2024-12-06 23:48:08.104040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:56.686 [2024-12-06 23:48:08.104190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.686 [2024-12-06 23:48:08.104214] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:56.686 [2024-12-06 23:48:08.104224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.686 [2024-12-06 23:48:08.106231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.686 [2024-12-06 23:48:08.106272] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:56.686 BaseBdev4 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.687 spare_malloc 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.687 spare_delay 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.687 [2024-12-06 23:48:08.168826] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:56.687 [2024-12-06 23:48:08.168882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.687 [2024-12-06 23:48:08.168899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:56.687 [2024-12-06 23:48:08.168910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.687 [2024-12-06 23:48:08.170860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.687 [2024-12-06 23:48:08.170901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:56.687 spare 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.687 [2024-12-06 23:48:08.180855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.687 [2024-12-06 23:48:08.182562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.687 [2024-12-06 23:48:08.182625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.687 [2024-12-06 23:48:08.182687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:56.687 [2024-12-06 23:48:08.182763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:56.687 [2024-12-06 23:48:08.182776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:56.687 [2024-12-06 23:48:08.183012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:56.687 [2024-12-06 23:48:08.183182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:56.687 [2024-12-06 23:48:08.183193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:56.687 [2024-12-06 23:48:08.183342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.687 "name": "raid_bdev1", 00:13:56.687 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:13:56.687 "strip_size_kb": 0, 00:13:56.687 "state": "online", 00:13:56.687 "raid_level": "raid1", 00:13:56.687 "superblock": false, 00:13:56.687 "num_base_bdevs": 4, 00:13:56.687 "num_base_bdevs_discovered": 4, 00:13:56.687 "num_base_bdevs_operational": 4, 00:13:56.687 "base_bdevs_list": [ 00:13:56.687 { 00:13:56.687 "name": "BaseBdev1", 00:13:56.687 "uuid": "44711c66-0b95-5cc7-a8af-04ee2d612172", 00:13:56.687 "is_configured": true, 00:13:56.687 "data_offset": 0, 00:13:56.687 "data_size": 65536 00:13:56.687 }, 00:13:56.687 { 00:13:56.687 "name": "BaseBdev2", 00:13:56.687 "uuid": "3cb98f00-16c5-5fbf-bfd3-6dff6fe8ff27", 00:13:56.687 "is_configured": true, 00:13:56.687 "data_offset": 0, 00:13:56.687 "data_size": 65536 00:13:56.687 }, 00:13:56.687 { 00:13:56.687 "name": "BaseBdev3", 00:13:56.687 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:13:56.687 "is_configured": true, 00:13:56.687 "data_offset": 0, 00:13:56.687 "data_size": 65536 00:13:56.687 }, 00:13:56.687 { 00:13:56.687 "name": "BaseBdev4", 00:13:56.687 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:13:56.687 "is_configured": true, 00:13:56.687 "data_offset": 0, 00:13:56.687 "data_size": 65536 00:13:56.687 } 00:13:56.687 ] 00:13:56.687 }' 00:13:56.687 
23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.687 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 [2024-12-06 23:48:08.664367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:57.259 23:48:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 [2024-12-06 23:48:08.727952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.259 "name": "raid_bdev1", 00:13:57.259 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:13:57.259 "strip_size_kb": 0, 00:13:57.259 "state": "online", 00:13:57.259 "raid_level": "raid1", 00:13:57.259 "superblock": false, 00:13:57.259 "num_base_bdevs": 4, 00:13:57.259 "num_base_bdevs_discovered": 3, 00:13:57.259 "num_base_bdevs_operational": 3, 00:13:57.259 "base_bdevs_list": [ 00:13:57.259 { 00:13:57.259 "name": null, 00:13:57.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.259 "is_configured": false, 00:13:57.259 "data_offset": 0, 00:13:57.259 "data_size": 65536 00:13:57.259 }, 00:13:57.259 { 00:13:57.259 "name": "BaseBdev2", 00:13:57.259 "uuid": "3cb98f00-16c5-5fbf-bfd3-6dff6fe8ff27", 00:13:57.259 "is_configured": true, 00:13:57.259 "data_offset": 0, 00:13:57.259 "data_size": 65536 00:13:57.259 }, 00:13:57.259 { 00:13:57.259 "name": "BaseBdev3", 00:13:57.259 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:13:57.259 "is_configured": true, 00:13:57.259 "data_offset": 0, 00:13:57.259 "data_size": 65536 00:13:57.259 }, 00:13:57.259 { 00:13:57.259 "name": "BaseBdev4", 00:13:57.259 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:13:57.259 "is_configured": true, 00:13:57.259 "data_offset": 0, 00:13:57.259 "data_size": 65536 00:13:57.259 } 00:13:57.259 ] 00:13:57.259 }' 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.259 23:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 [2024-12-06 23:48:08.804215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:57.259 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:57.259 Zero copy mechanism will not be used. 00:13:57.259 Running I/O for 60 seconds... 
00:13:57.829 23:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.829 23:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.829 23:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.829 [2024-12-06 23:48:09.178874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.829 23:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.829 23:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:57.829 [2024-12-06 23:48:09.232094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:57.829 [2024-12-06 23:48:09.234001] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.829 [2024-12-06 23:48:09.336033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.829 [2024-12-06 23:48:09.336636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:58.090 [2024-12-06 23:48:09.449361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.090 [2024-12-06 23:48:09.450132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.351 [2024-12-06 23:48:09.797121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:58.611 165.00 IOPS, 495.00 MiB/s [2024-12-06T23:48:10.174Z] [2024-12-06 23:48:10.018086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.611 [2024-12-06 23:48:10.018726] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.871 "name": "raid_bdev1", 00:13:58.871 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:13:58.871 "strip_size_kb": 0, 00:13:58.871 "state": "online", 00:13:58.871 "raid_level": "raid1", 00:13:58.871 "superblock": false, 00:13:58.871 "num_base_bdevs": 4, 00:13:58.871 "num_base_bdevs_discovered": 4, 00:13:58.871 "num_base_bdevs_operational": 4, 00:13:58.871 "process": { 00:13:58.871 "type": "rebuild", 00:13:58.871 "target": "spare", 00:13:58.871 "progress": { 00:13:58.871 "blocks": 12288, 00:13:58.871 "percent": 18 00:13:58.871 } 00:13:58.871 }, 00:13:58.871 "base_bdevs_list": [ 00:13:58.871 { 00:13:58.871 "name": "spare", 00:13:58.871 "uuid": 
"8cca11f8-1a3c-521c-a937-40415370a3bc", 00:13:58.871 "is_configured": true, 00:13:58.871 "data_offset": 0, 00:13:58.871 "data_size": 65536 00:13:58.871 }, 00:13:58.871 { 00:13:58.871 "name": "BaseBdev2", 00:13:58.871 "uuid": "3cb98f00-16c5-5fbf-bfd3-6dff6fe8ff27", 00:13:58.871 "is_configured": true, 00:13:58.871 "data_offset": 0, 00:13:58.871 "data_size": 65536 00:13:58.871 }, 00:13:58.871 { 00:13:58.871 "name": "BaseBdev3", 00:13:58.871 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:13:58.871 "is_configured": true, 00:13:58.871 "data_offset": 0, 00:13:58.871 "data_size": 65536 00:13:58.871 }, 00:13:58.871 { 00:13:58.871 "name": "BaseBdev4", 00:13:58.871 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:13:58.871 "is_configured": true, 00:13:58.871 "data_offset": 0, 00:13:58.871 "data_size": 65536 00:13:58.871 } 00:13:58.871 ] 00:13:58.871 }' 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.871 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.871 [2024-12-06 23:48:10.392914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.131 [2024-12-06 23:48:10.454373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:59.132 [2024-12-06 23:48:10.561945] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:59.132 [2024-12-06 23:48:10.572005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.132 [2024-12-06 23:48:10.572108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.132 [2024-12-06 23:48:10.572140] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:59.132 [2024-12-06 23:48:10.594243] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.132 "name": "raid_bdev1", 00:13:59.132 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:13:59.132 "strip_size_kb": 0, 00:13:59.132 "state": "online", 00:13:59.132 "raid_level": "raid1", 00:13:59.132 "superblock": false, 00:13:59.132 "num_base_bdevs": 4, 00:13:59.132 "num_base_bdevs_discovered": 3, 00:13:59.132 "num_base_bdevs_operational": 3, 00:13:59.132 "base_bdevs_list": [ 00:13:59.132 { 00:13:59.132 "name": null, 00:13:59.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.132 "is_configured": false, 00:13:59.132 "data_offset": 0, 00:13:59.132 "data_size": 65536 00:13:59.132 }, 00:13:59.132 { 00:13:59.132 "name": "BaseBdev2", 00:13:59.132 "uuid": "3cb98f00-16c5-5fbf-bfd3-6dff6fe8ff27", 00:13:59.132 "is_configured": true, 00:13:59.132 "data_offset": 0, 00:13:59.132 "data_size": 65536 00:13:59.132 }, 00:13:59.132 { 00:13:59.132 "name": "BaseBdev3", 00:13:59.132 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:13:59.132 "is_configured": true, 00:13:59.132 "data_offset": 0, 00:13:59.132 "data_size": 65536 00:13:59.132 }, 00:13:59.132 { 00:13:59.132 "name": "BaseBdev4", 00:13:59.132 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:13:59.132 "is_configured": true, 00:13:59.132 "data_offset": 0, 00:13:59.132 "data_size": 65536 00:13:59.132 } 00:13:59.132 ] 00:13:59.132 }' 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.132 23:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.653 127.00 IOPS, 381.00 MiB/s 
[2024-12-06T23:48:11.216Z] 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.653 "name": "raid_bdev1", 00:13:59.653 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:13:59.653 "strip_size_kb": 0, 00:13:59.653 "state": "online", 00:13:59.653 "raid_level": "raid1", 00:13:59.653 "superblock": false, 00:13:59.653 "num_base_bdevs": 4, 00:13:59.653 "num_base_bdevs_discovered": 3, 00:13:59.653 "num_base_bdevs_operational": 3, 00:13:59.653 "base_bdevs_list": [ 00:13:59.653 { 00:13:59.653 "name": null, 00:13:59.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.653 "is_configured": false, 00:13:59.653 "data_offset": 0, 00:13:59.653 "data_size": 65536 00:13:59.653 }, 00:13:59.653 { 00:13:59.653 "name": "BaseBdev2", 00:13:59.653 "uuid": "3cb98f00-16c5-5fbf-bfd3-6dff6fe8ff27", 00:13:59.653 "is_configured": true, 00:13:59.653 
"data_offset": 0, 00:13:59.653 "data_size": 65536 00:13:59.653 }, 00:13:59.653 { 00:13:59.653 "name": "BaseBdev3", 00:13:59.653 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:13:59.653 "is_configured": true, 00:13:59.653 "data_offset": 0, 00:13:59.653 "data_size": 65536 00:13:59.653 }, 00:13:59.653 { 00:13:59.653 "name": "BaseBdev4", 00:13:59.653 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:13:59.653 "is_configured": true, 00:13:59.653 "data_offset": 0, 00:13:59.653 "data_size": 65536 00:13:59.653 } 00:13:59.653 ] 00:13:59.653 }' 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.653 23:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.653 [2024-12-06 23:48:11.199156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.913 23:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.913 23:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:59.913 [2024-12-06 23:48:11.251138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:59.913 [2024-12-06 23:48:11.253113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.913 [2024-12-06 23:48:11.365800] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:59.913 [2024-12-06 23:48:11.367357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.173 [2024-12-06 23:48:11.569052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.173 [2024-12-06 23:48:11.569378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.433 134.00 IOPS, 402.00 MiB/s [2024-12-06T23:48:11.996Z] [2024-12-06 23:48:11.939876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.693 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.953 "name": 
"raid_bdev1", 00:14:00.953 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:14:00.953 "strip_size_kb": 0, 00:14:00.953 "state": "online", 00:14:00.953 "raid_level": "raid1", 00:14:00.953 "superblock": false, 00:14:00.953 "num_base_bdevs": 4, 00:14:00.953 "num_base_bdevs_discovered": 4, 00:14:00.953 "num_base_bdevs_operational": 4, 00:14:00.953 "process": { 00:14:00.953 "type": "rebuild", 00:14:00.953 "target": "spare", 00:14:00.953 "progress": { 00:14:00.953 "blocks": 14336, 00:14:00.953 "percent": 21 00:14:00.953 } 00:14:00.953 }, 00:14:00.953 "base_bdevs_list": [ 00:14:00.953 { 00:14:00.953 "name": "spare", 00:14:00.953 "uuid": "8cca11f8-1a3c-521c-a937-40415370a3bc", 00:14:00.953 "is_configured": true, 00:14:00.953 "data_offset": 0, 00:14:00.953 "data_size": 65536 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "name": "BaseBdev2", 00:14:00.953 "uuid": "3cb98f00-16c5-5fbf-bfd3-6dff6fe8ff27", 00:14:00.953 "is_configured": true, 00:14:00.953 "data_offset": 0, 00:14:00.953 "data_size": 65536 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "name": "BaseBdev3", 00:14:00.953 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:14:00.953 "is_configured": true, 00:14:00.953 "data_offset": 0, 00:14:00.953 "data_size": 65536 00:14:00.953 }, 00:14:00.953 { 00:14:00.953 "name": "BaseBdev4", 00:14:00.953 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:14:00.953 "is_configured": true, 00:14:00.953 "data_offset": 0, 00:14:00.953 "data_size": 65536 00:14:00.953 } 00:14:00.953 ] 00:14:00.953 }' 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.953 [2024-12-06 23:48:12.306644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:00.953 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:00.954 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:00.954 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:00.954 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.954 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.954 [2024-12-06 23:48:12.403192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.214 [2024-12-06 23:48:12.536071] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:01.214 [2024-12-06 23:48:12.536162] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.214 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.214 "name": "raid_bdev1", 00:14:01.214 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:14:01.214 "strip_size_kb": 0, 00:14:01.214 "state": "online", 00:14:01.214 "raid_level": "raid1", 00:14:01.214 "superblock": false, 00:14:01.214 "num_base_bdevs": 4, 00:14:01.214 "num_base_bdevs_discovered": 3, 00:14:01.214 "num_base_bdevs_operational": 3, 00:14:01.214 "process": { 00:14:01.214 "type": "rebuild", 00:14:01.214 "target": "spare", 00:14:01.214 "progress": { 00:14:01.214 "blocks": 18432, 00:14:01.214 "percent": 28 00:14:01.214 } 00:14:01.214 }, 00:14:01.214 "base_bdevs_list": [ 00:14:01.214 { 00:14:01.214 "name": "spare", 00:14:01.214 "uuid": "8cca11f8-1a3c-521c-a937-40415370a3bc", 00:14:01.215 "is_configured": true, 00:14:01.215 "data_offset": 0, 00:14:01.215 "data_size": 65536 00:14:01.215 }, 00:14:01.215 { 00:14:01.215 "name": null, 00:14:01.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.215 "is_configured": false, 00:14:01.215 "data_offset": 0, 00:14:01.215 "data_size": 65536 00:14:01.215 }, 00:14:01.215 { 00:14:01.215 "name": "BaseBdev3", 00:14:01.215 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:14:01.215 "is_configured": true, 00:14:01.215 
"data_offset": 0, 00:14:01.215 "data_size": 65536 00:14:01.215 }, 00:14:01.215 { 00:14:01.215 "name": "BaseBdev4", 00:14:01.215 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:14:01.215 "is_configured": true, 00:14:01.215 "data_offset": 0, 00:14:01.215 "data_size": 65536 00:14:01.215 } 00:14:01.215 ] 00:14:01.215 }' 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=481 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.215 "name": "raid_bdev1", 00:14:01.215 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:14:01.215 "strip_size_kb": 0, 00:14:01.215 "state": "online", 00:14:01.215 "raid_level": "raid1", 00:14:01.215 "superblock": false, 00:14:01.215 "num_base_bdevs": 4, 00:14:01.215 "num_base_bdevs_discovered": 3, 00:14:01.215 "num_base_bdevs_operational": 3, 00:14:01.215 "process": { 00:14:01.215 "type": "rebuild", 00:14:01.215 "target": "spare", 00:14:01.215 "progress": { 00:14:01.215 "blocks": 20480, 00:14:01.215 "percent": 31 00:14:01.215 } 00:14:01.215 }, 00:14:01.215 "base_bdevs_list": [ 00:14:01.215 { 00:14:01.215 "name": "spare", 00:14:01.215 "uuid": "8cca11f8-1a3c-521c-a937-40415370a3bc", 00:14:01.215 "is_configured": true, 00:14:01.215 "data_offset": 0, 00:14:01.215 "data_size": 65536 00:14:01.215 }, 00:14:01.215 { 00:14:01.215 "name": null, 00:14:01.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.215 "is_configured": false, 00:14:01.215 "data_offset": 0, 00:14:01.215 "data_size": 65536 00:14:01.215 }, 00:14:01.215 { 00:14:01.215 "name": "BaseBdev3", 00:14:01.215 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:14:01.215 "is_configured": true, 00:14:01.215 "data_offset": 0, 00:14:01.215 "data_size": 65536 00:14:01.215 }, 00:14:01.215 { 00:14:01.215 "name": "BaseBdev4", 00:14:01.215 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:14:01.215 "is_configured": true, 00:14:01.215 "data_offset": 0, 00:14:01.215 "data_size": 65536 00:14:01.215 } 00:14:01.215 ] 00:14:01.215 }' 00:14:01.215 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.475 [2024-12-06 23:48:12.777406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 
00:14:01.475 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.475 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.475 121.75 IOPS, 365.25 MiB/s [2024-12-06T23:48:13.038Z] 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.475 23:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.735 [2024-12-06 23:48:13.132381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:01.995 [2024-12-06 23:48:13.358733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:02.535 107.80 IOPS, 323.40 MiB/s [2024-12-06T23:48:14.098Z] 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.535 
[2024-12-06 23:48:13.846901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.535 "name": "raid_bdev1", 00:14:02.535 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:14:02.535 "strip_size_kb": 0, 00:14:02.535 "state": "online", 00:14:02.535 "raid_level": "raid1", 00:14:02.535 "superblock": false, 00:14:02.535 "num_base_bdevs": 4, 00:14:02.535 "num_base_bdevs_discovered": 3, 00:14:02.535 "num_base_bdevs_operational": 3, 00:14:02.535 "process": { 00:14:02.535 "type": "rebuild", 00:14:02.535 "target": "spare", 00:14:02.535 "progress": { 00:14:02.535 "blocks": 32768, 00:14:02.535 "percent": 50 00:14:02.535 } 00:14:02.535 }, 00:14:02.535 "base_bdevs_list": [ 00:14:02.535 { 00:14:02.535 "name": "spare", 00:14:02.535 "uuid": "8cca11f8-1a3c-521c-a937-40415370a3bc", 00:14:02.535 "is_configured": true, 00:14:02.535 "data_offset": 0, 00:14:02.535 "data_size": 65536 00:14:02.535 }, 00:14:02.535 { 00:14:02.535 "name": null, 00:14:02.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.535 "is_configured": false, 00:14:02.535 "data_offset": 0, 00:14:02.535 "data_size": 65536 00:14:02.535 }, 00:14:02.535 { 00:14:02.535 "name": "BaseBdev3", 00:14:02.535 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:14:02.535 "is_configured": true, 00:14:02.535 "data_offset": 0, 00:14:02.535 "data_size": 65536 00:14:02.535 }, 00:14:02.535 { 00:14:02.535 "name": "BaseBdev4", 00:14:02.535 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:14:02.535 "is_configured": true, 00:14:02.535 "data_offset": 0, 00:14:02.535 "data_size": 65536 00:14:02.535 } 00:14:02.535 ] 00:14:02.535 }' 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.535 
23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.535 23:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.796 [2024-12-06 23:48:14.175463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:02.796 [2024-12-06 23:48:14.175700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:03.056 [2024-12-06 23:48:14.410367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:03.316 [2024-12-06 23:48:14.630436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:03.575 96.50 IOPS, 289.50 MiB/s [2024-12-06T23:48:15.138Z] 23:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.575 23:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.575 23:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.575 "name": "raid_bdev1", 00:14:03.575 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:14:03.575 "strip_size_kb": 0, 00:14:03.575 "state": "online", 00:14:03.575 "raid_level": "raid1", 00:14:03.575 "superblock": false, 00:14:03.575 "num_base_bdevs": 4, 00:14:03.575 "num_base_bdevs_discovered": 3, 00:14:03.575 "num_base_bdevs_operational": 3, 00:14:03.575 "process": { 00:14:03.575 "type": "rebuild", 00:14:03.575 "target": "spare", 00:14:03.575 "progress": { 00:14:03.575 "blocks": 51200, 00:14:03.575 "percent": 78 00:14:03.575 } 00:14:03.575 }, 00:14:03.575 "base_bdevs_list": [ 00:14:03.575 { 00:14:03.575 "name": "spare", 00:14:03.575 "uuid": "8cca11f8-1a3c-521c-a937-40415370a3bc", 00:14:03.575 "is_configured": true, 00:14:03.575 "data_offset": 0, 00:14:03.575 "data_size": 65536 00:14:03.575 }, 00:14:03.575 { 00:14:03.575 "name": null, 00:14:03.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.575 "is_configured": false, 00:14:03.575 "data_offset": 0, 00:14:03.575 "data_size": 65536 00:14:03.575 }, 00:14:03.575 { 00:14:03.575 "name": "BaseBdev3", 00:14:03.576 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:14:03.576 "is_configured": true, 00:14:03.576 "data_offset": 0, 00:14:03.576 "data_size": 65536 00:14:03.576 }, 00:14:03.576 { 00:14:03.576 "name": "BaseBdev4", 00:14:03.576 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:14:03.576 "is_configured": true, 00:14:03.576 "data_offset": 0, 00:14:03.576 "data_size": 65536 00:14:03.576 } 00:14:03.576 ] 00:14:03.576 }' 00:14:03.576 23:48:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.576 23:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.576 23:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.576 [2024-12-06 23:48:15.076918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:03.576 23:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.576 23:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.834 [2024-12-06 23:48:15.292845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:03.834 [2024-12-06 23:48:15.293254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:04.093 [2024-12-06 23:48:15.408218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:04.352 [2024-12-06 23:48:15.737359] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:04.352 89.43 IOPS, 268.29 MiB/s [2024-12-06T23:48:15.915Z] [2024-12-06 23:48:15.842076] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:04.352 [2024-12-06 23:48:15.845776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.611 23:48:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.611 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.611 "name": "raid_bdev1", 00:14:04.611 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:14:04.611 "strip_size_kb": 0, 00:14:04.612 "state": "online", 00:14:04.612 "raid_level": "raid1", 00:14:04.612 "superblock": false, 00:14:04.612 "num_base_bdevs": 4, 00:14:04.612 "num_base_bdevs_discovered": 3, 00:14:04.612 "num_base_bdevs_operational": 3, 00:14:04.612 "base_bdevs_list": [ 00:14:04.612 { 00:14:04.612 "name": "spare", 00:14:04.612 "uuid": "8cca11f8-1a3c-521c-a937-40415370a3bc", 00:14:04.612 "is_configured": true, 00:14:04.612 "data_offset": 0, 00:14:04.612 "data_size": 65536 00:14:04.612 }, 00:14:04.612 { 00:14:04.612 "name": null, 00:14:04.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.612 "is_configured": false, 00:14:04.612 "data_offset": 0, 00:14:04.612 "data_size": 65536 00:14:04.612 }, 00:14:04.612 { 00:14:04.612 "name": "BaseBdev3", 00:14:04.612 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:14:04.612 "is_configured": true, 00:14:04.612 "data_offset": 0, 00:14:04.612 "data_size": 65536 00:14:04.612 }, 
00:14:04.612 { 00:14:04.612 "name": "BaseBdev4", 00:14:04.612 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:14:04.612 "is_configured": true, 00:14:04.612 "data_offset": 0, 00:14:04.612 "data_size": 65536 00:14:04.612 } 00:14:04.612 ] 00:14:04.612 }' 00:14:04.612 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.871 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:04.871 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:04.872 "name": "raid_bdev1", 00:14:04.872 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:14:04.872 "strip_size_kb": 0, 00:14:04.872 "state": "online", 00:14:04.872 "raid_level": "raid1", 00:14:04.872 "superblock": false, 00:14:04.872 "num_base_bdevs": 4, 00:14:04.872 "num_base_bdevs_discovered": 3, 00:14:04.872 "num_base_bdevs_operational": 3, 00:14:04.872 "base_bdevs_list": [ 00:14:04.872 { 00:14:04.872 "name": "spare", 00:14:04.872 "uuid": "8cca11f8-1a3c-521c-a937-40415370a3bc", 00:14:04.872 "is_configured": true, 00:14:04.872 "data_offset": 0, 00:14:04.872 "data_size": 65536 00:14:04.872 }, 00:14:04.872 { 00:14:04.872 "name": null, 00:14:04.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.872 "is_configured": false, 00:14:04.872 "data_offset": 0, 00:14:04.872 "data_size": 65536 00:14:04.872 }, 00:14:04.872 { 00:14:04.872 "name": "BaseBdev3", 00:14:04.872 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:14:04.872 "is_configured": true, 00:14:04.872 "data_offset": 0, 00:14:04.872 "data_size": 65536 00:14:04.872 }, 00:14:04.872 { 00:14:04.872 "name": "BaseBdev4", 00:14:04.872 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:14:04.872 "is_configured": true, 00:14:04.872 "data_offset": 0, 00:14:04.872 "data_size": 65536 00:14:04.872 } 00:14:04.872 ] 00:14:04.872 }' 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.872 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.133 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.133 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.133 "name": "raid_bdev1", 00:14:05.133 "uuid": "b4195ad5-fafc-42dc-8608-0b46051e2439", 00:14:05.133 "strip_size_kb": 0, 00:14:05.133 "state": "online", 00:14:05.133 "raid_level": "raid1", 00:14:05.133 "superblock": false, 00:14:05.133 "num_base_bdevs": 4, 00:14:05.133 "num_base_bdevs_discovered": 3, 00:14:05.133 "num_base_bdevs_operational": 3, 00:14:05.133 "base_bdevs_list": [ 00:14:05.133 { 00:14:05.133 "name": "spare", 00:14:05.133 "uuid": 
"8cca11f8-1a3c-521c-a937-40415370a3bc", 00:14:05.133 "is_configured": true, 00:14:05.133 "data_offset": 0, 00:14:05.133 "data_size": 65536 00:14:05.133 }, 00:14:05.133 { 00:14:05.133 "name": null, 00:14:05.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.133 "is_configured": false, 00:14:05.133 "data_offset": 0, 00:14:05.133 "data_size": 65536 00:14:05.133 }, 00:14:05.133 { 00:14:05.133 "name": "BaseBdev3", 00:14:05.133 "uuid": "bf62e9b9-ecfe-52c5-9225-5c707285a9c4", 00:14:05.133 "is_configured": true, 00:14:05.133 "data_offset": 0, 00:14:05.133 "data_size": 65536 00:14:05.133 }, 00:14:05.133 { 00:14:05.133 "name": "BaseBdev4", 00:14:05.133 "uuid": "0a41edad-4986-5b94-85aa-cb8fe2fc1622", 00:14:05.133 "is_configured": true, 00:14:05.133 "data_offset": 0, 00:14:05.133 "data_size": 65536 00:14:05.133 } 00:14:05.133 ] 00:14:05.133 }' 00:14:05.133 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.133 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 81.88 IOPS, 245.62 MiB/s [2024-12-06T23:48:16.956Z] 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.393 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.393 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 [2024-12-06 23:48:16.877526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.393 [2024-12-06 23:48:16.877564] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.393 00:14:05.393 Latency(us) 00:14:05.393 [2024-12-06T23:48:16.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.393 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:05.393 raid_bdev1 : 8.13 81.21 243.64 0.00 0.00 17859.18 
336.27 116762.83 00:14:05.393 [2024-12-06T23:48:16.956Z] =================================================================================================================== 00:14:05.393 [2024-12-06T23:48:16.956Z] Total : 81.21 243.64 0.00 0.00 17859.18 336.27 116762.83 00:14:05.393 [2024-12-06 23:48:16.937230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.393 [2024-12-06 23:48:16.937336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.393 [2024-12-06 23:48:16.937460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.393 [2024-12-06 23:48:16.937512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:05.393 { 00:14:05.393 "results": [ 00:14:05.393 { 00:14:05.393 "job": "raid_bdev1", 00:14:05.393 "core_mask": "0x1", 00:14:05.393 "workload": "randrw", 00:14:05.393 "percentage": 50, 00:14:05.393 "status": "finished", 00:14:05.393 "queue_depth": 2, 00:14:05.393 "io_size": 3145728, 00:14:05.393 "runtime": 8.126798, 00:14:05.393 "iops": 81.2127974634044, 00:14:05.393 "mibps": 243.6383923902132, 00:14:05.393 "io_failed": 0, 00:14:05.393 "io_timeout": 0, 00:14:05.393 "avg_latency_us": 17859.18196109567, 00:14:05.393 "min_latency_us": 336.2655021834061, 00:14:05.393 "max_latency_us": 116762.82969432314 00:14:05.393 } 00:14:05.393 ], 00:14:05.393 "core_count": 1 00:14:05.393 } 00:14:05.393 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.393 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.393 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:05.393 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.393 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.653 23:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:05.653 /dev/nbd0 00:14:05.653 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:05.912 23:48:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.912 1+0 records in 00:14:05.912 1+0 records out 00:14:05.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409799 s, 10.0 MB/s 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.912 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@728 -- # continue 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.913 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:05.913 /dev/nbd1 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.172 1+0 records in 00:14:06.172 1+0 records out 00:14:06.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032798 s, 12.5 MB/s 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.172 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.431 23:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:06.690 /dev/nbd1 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.690 1+0 records in 00:14:06.690 1+0 records out 
00:14:06.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368032 s, 11.1 MB/s 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.690 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.950 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78667 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78667 ']' 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78667 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78667 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78667' 00:14:07.208 killing process with pid 78667 00:14:07.208 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78667 00:14:07.209 Received shutdown signal, test time was about 9.867796 seconds 00:14:07.209 00:14:07.209 Latency(us) 00:14:07.209 [2024-12-06T23:48:18.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.209 [2024-12-06T23:48:18.772Z] =================================================================================================================== 00:14:07.209 [2024-12-06T23:48:18.772Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:07.209 [2024-12-06 
23:48:18.655082] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.209 23:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78667 00:14:07.777 [2024-12-06 23:48:19.044916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:08.714 00:14:08.714 real 0m13.182s 00:14:08.714 user 0m16.589s 00:14:08.714 sys 0m1.851s 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.714 ************************************ 00:14:08.714 END TEST raid_rebuild_test_io 00:14:08.714 ************************************ 00:14:08.714 23:48:20 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:08.714 23:48:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:08.714 23:48:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.714 23:48:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.714 ************************************ 00:14:08.714 START TEST raid_rebuild_test_sb_io 00:14:08.714 ************************************ 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@573 -- # local verify=true 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:08.714 23:48:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:08.714 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79078 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79078 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79078 ']' 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.715 23:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.974 [2024-12-06 23:48:20.341033] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:14:08.974 [2024-12-06 23:48:20.341243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:08.974 Zero copy mechanism will not be used. 00:14:08.974 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79078 ] 00:14:08.974 [2024-12-06 23:48:20.522308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.233 [2024-12-06 23:48:20.625598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.492 [2024-12-06 23:48:20.830832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.492 [2024-12-06 23:48:20.830964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.752 BaseBdev1_malloc 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.752 [2024-12-06 23:48:21.211054] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:09.752 [2024-12-06 23:48:21.211154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.752 [2024-12-06 23:48:21.211207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:09.752 [2024-12-06 23:48:21.211237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.752 [2024-12-06 23:48:21.213267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.752 [2024-12-06 23:48:21.213360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.752 BaseBdev1 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.752 BaseBdev2_malloc 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.752 [2024-12-06 23:48:21.264285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:09.752 [2024-12-06 23:48:21.264401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.752 [2024-12-06 23:48:21.264438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:09.752 [2024-12-06 23:48:21.264487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.752 [2024-12-06 23:48:21.266517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.752 [2024-12-06 23:48:21.266589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:09.752 BaseBdev2 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.752 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 BaseBdev3_malloc 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.011 23:48:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 [2024-12-06 23:48:21.349433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:10.011 [2024-12-06 23:48:21.349486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.011 [2024-12-06 23:48:21.349507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:10.011 [2024-12-06 23:48:21.349518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.011 [2024-12-06 23:48:21.351491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.011 [2024-12-06 23:48:21.351542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.011 BaseBdev3 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 BaseBdev4_malloc 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 [2024-12-06 23:48:21.399831] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:10.011 [2024-12-06 23:48:21.399886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.011 [2024-12-06 23:48:21.399907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:10.011 [2024-12-06 23:48:21.399917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.011 [2024-12-06 23:48:21.401852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.011 [2024-12-06 23:48:21.401959] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:10.011 BaseBdev4 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 spare_malloc 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 spare_delay 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 [2024-12-06 23:48:21.468168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.011 [2024-12-06 23:48:21.468272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.011 [2024-12-06 23:48:21.468292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:10.011 [2024-12-06 23:48:21.468303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.011 [2024-12-06 23:48:21.470281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.011 [2024-12-06 23:48:21.470323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.011 spare 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.011 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 [2024-12-06 23:48:21.480196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.011 [2024-12-06 23:48:21.481925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.011 [2024-12-06 23:48:21.481990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.012 [2024-12-06 23:48:21.482040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.012 [2024-12-06 23:48:21.482211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:10.012 [2024-12-06 23:48:21.482226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.012 [2024-12-06 23:48:21.482460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:10.012 [2024-12-06 23:48:21.482637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:10.012 [2024-12-06 23:48:21.482646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:10.012 [2024-12-06 23:48:21.482785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.012 "name": "raid_bdev1", 00:14:10.012 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:10.012 "strip_size_kb": 0, 00:14:10.012 "state": "online", 00:14:10.012 "raid_level": "raid1", 00:14:10.012 "superblock": true, 00:14:10.012 "num_base_bdevs": 4, 00:14:10.012 "num_base_bdevs_discovered": 4, 00:14:10.012 "num_base_bdevs_operational": 4, 00:14:10.012 "base_bdevs_list": [ 00:14:10.012 { 00:14:10.012 "name": "BaseBdev1", 00:14:10.012 "uuid": "c67734f5-7af5-548a-b4b5-64246a4a7f02", 00:14:10.012 "is_configured": true, 00:14:10.012 "data_offset": 2048, 00:14:10.012 "data_size": 63488 00:14:10.012 }, 00:14:10.012 { 00:14:10.012 "name": "BaseBdev2", 00:14:10.012 "uuid": "a56a8a26-93dd-5283-9c97-fc58edf0666c", 00:14:10.012 "is_configured": true, 00:14:10.012 "data_offset": 2048, 00:14:10.012 "data_size": 63488 00:14:10.012 }, 00:14:10.012 { 00:14:10.012 "name": "BaseBdev3", 00:14:10.012 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:10.012 "is_configured": true, 00:14:10.012 "data_offset": 2048, 00:14:10.012 "data_size": 63488 00:14:10.012 }, 00:14:10.012 { 00:14:10.012 "name": "BaseBdev4", 00:14:10.012 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:10.012 "is_configured": true, 00:14:10.012 "data_offset": 2048, 00:14:10.012 "data_size": 63488 00:14:10.012 } 00:14:10.012 ] 00:14:10.012 }' 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:10.012 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.591 [2024-12-06 23:48:21.915933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.591 23:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:10.591 23:48:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.591 [2024-12-06 23:48:22.011572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.591 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.592 "name": "raid_bdev1", 00:14:10.592 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:10.592 "strip_size_kb": 0, 00:14:10.592 "state": "online", 00:14:10.592 "raid_level": "raid1", 00:14:10.592 "superblock": true, 00:14:10.592 "num_base_bdevs": 4, 00:14:10.592 "num_base_bdevs_discovered": 3, 00:14:10.592 "num_base_bdevs_operational": 3, 00:14:10.592 "base_bdevs_list": [ 00:14:10.592 { 00:14:10.592 "name": null, 00:14:10.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.592 "is_configured": false, 00:14:10.592 "data_offset": 0, 00:14:10.592 "data_size": 63488 00:14:10.592 }, 00:14:10.592 { 00:14:10.592 "name": "BaseBdev2", 00:14:10.592 "uuid": "a56a8a26-93dd-5283-9c97-fc58edf0666c", 00:14:10.592 "is_configured": true, 00:14:10.592 "data_offset": 2048, 00:14:10.592 "data_size": 63488 00:14:10.592 }, 00:14:10.592 { 00:14:10.592 "name": "BaseBdev3", 00:14:10.592 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:10.592 "is_configured": true, 00:14:10.592 "data_offset": 2048, 00:14:10.592 "data_size": 63488 00:14:10.592 }, 00:14:10.592 { 00:14:10.592 "name": "BaseBdev4", 00:14:10.592 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:10.592 "is_configured": true, 00:14:10.592 "data_offset": 2048, 00:14:10.592 "data_size": 63488 00:14:10.592 } 00:14:10.592 ] 00:14:10.592 }' 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.592 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.592 [2024-12-06 23:48:22.103526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:10.592 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.592 Zero copy mechanism will not be used. 
00:14:10.592 Running I/O for 60 seconds... 00:14:11.166 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.166 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.166 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.166 [2024-12-06 23:48:22.475549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.166 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.166 23:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:11.166 [2024-12-06 23:48:22.533847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:11.166 [2024-12-06 23:48:22.535899] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.166 [2024-12-06 23:48:22.644952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:11.166 [2024-12-06 23:48:22.646189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:11.436 [2024-12-06 23:48:22.875040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:11.436 [2024-12-06 23:48:22.875820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:11.708 146.00 IOPS, 438.00 MiB/s [2024-12-06T23:48:23.271Z] [2024-12-06 23:48:23.223320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:11.983 [2024-12-06 23:48:23.344347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:11.983 
23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.983 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.983 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.983 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.983 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.983 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.983 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.983 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.983 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.269 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.269 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.269 "name": "raid_bdev1", 00:14:12.269 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:12.269 "strip_size_kb": 0, 00:14:12.269 "state": "online", 00:14:12.269 "raid_level": "raid1", 00:14:12.269 "superblock": true, 00:14:12.269 "num_base_bdevs": 4, 00:14:12.269 "num_base_bdevs_discovered": 4, 00:14:12.269 "num_base_bdevs_operational": 4, 00:14:12.269 "process": { 00:14:12.269 "type": "rebuild", 00:14:12.269 "target": "spare", 00:14:12.269 "progress": { 00:14:12.269 "blocks": 10240, 00:14:12.269 "percent": 16 00:14:12.269 } 00:14:12.269 }, 00:14:12.269 "base_bdevs_list": [ 00:14:12.269 { 00:14:12.269 "name": "spare", 00:14:12.269 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:12.269 "is_configured": true, 00:14:12.269 "data_offset": 
2048, 00:14:12.269 "data_size": 63488 00:14:12.269 }, 00:14:12.269 { 00:14:12.269 "name": "BaseBdev2", 00:14:12.269 "uuid": "a56a8a26-93dd-5283-9c97-fc58edf0666c", 00:14:12.269 "is_configured": true, 00:14:12.269 "data_offset": 2048, 00:14:12.269 "data_size": 63488 00:14:12.269 }, 00:14:12.269 { 00:14:12.269 "name": "BaseBdev3", 00:14:12.269 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:12.270 "is_configured": true, 00:14:12.270 "data_offset": 2048, 00:14:12.270 "data_size": 63488 00:14:12.270 }, 00:14:12.270 { 00:14:12.270 "name": "BaseBdev4", 00:14:12.270 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:12.270 "is_configured": true, 00:14:12.270 "data_offset": 2048, 00:14:12.270 "data_size": 63488 00:14:12.270 } 00:14:12.270 ] 00:14:12.270 }' 00:14:12.270 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.270 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.270 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.270 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.270 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:12.270 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.270 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.270 [2024-12-06 23:48:23.678509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.270 [2024-12-06 23:48:23.680104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:12.270 [2024-12-06 23:48:23.680529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:14:12.270 [2024-12-06 23:48:23.783047] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:12.270 [2024-12-06 23:48:23.792241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.270 [2024-12-06 23:48:23.792322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.270 [2024-12-06 23:48:23.792352] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:12.542 [2024-12-06 23:48:23.820588] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.542 "name": "raid_bdev1", 00:14:12.542 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:12.542 "strip_size_kb": 0, 00:14:12.542 "state": "online", 00:14:12.542 "raid_level": "raid1", 00:14:12.542 "superblock": true, 00:14:12.542 "num_base_bdevs": 4, 00:14:12.542 "num_base_bdevs_discovered": 3, 00:14:12.542 "num_base_bdevs_operational": 3, 00:14:12.542 "base_bdevs_list": [ 00:14:12.542 { 00:14:12.542 "name": null, 00:14:12.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.542 "is_configured": false, 00:14:12.542 "data_offset": 0, 00:14:12.542 "data_size": 63488 00:14:12.542 }, 00:14:12.542 { 00:14:12.542 "name": "BaseBdev2", 00:14:12.542 "uuid": "a56a8a26-93dd-5283-9c97-fc58edf0666c", 00:14:12.542 "is_configured": true, 00:14:12.542 "data_offset": 2048, 00:14:12.542 "data_size": 63488 00:14:12.542 }, 00:14:12.542 { 00:14:12.542 "name": "BaseBdev3", 00:14:12.542 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:12.542 "is_configured": true, 00:14:12.542 "data_offset": 2048, 00:14:12.542 "data_size": 63488 00:14:12.542 }, 00:14:12.542 { 00:14:12.542 "name": "BaseBdev4", 00:14:12.542 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:12.542 "is_configured": true, 00:14:12.542 "data_offset": 2048, 00:14:12.542 "data_size": 63488 00:14:12.542 } 00:14:12.542 ] 00:14:12.542 }' 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:12.542 23:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.828 135.00 IOPS, 405.00 MiB/s [2024-12-06T23:48:24.391Z] 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.828 "name": "raid_bdev1", 00:14:12.828 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:12.828 "strip_size_kb": 0, 00:14:12.828 "state": "online", 00:14:12.828 "raid_level": "raid1", 00:14:12.828 "superblock": true, 00:14:12.828 "num_base_bdevs": 4, 00:14:12.828 "num_base_bdevs_discovered": 3, 00:14:12.828 "num_base_bdevs_operational": 3, 00:14:12.828 "base_bdevs_list": [ 00:14:12.828 { 00:14:12.828 "name": null, 00:14:12.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.828 "is_configured": false, 00:14:12.828 "data_offset": 0, 00:14:12.828 "data_size": 63488 
00:14:12.828 }, 00:14:12.828 { 00:14:12.828 "name": "BaseBdev2", 00:14:12.828 "uuid": "a56a8a26-93dd-5283-9c97-fc58edf0666c", 00:14:12.828 "is_configured": true, 00:14:12.828 "data_offset": 2048, 00:14:12.828 "data_size": 63488 00:14:12.828 }, 00:14:12.828 { 00:14:12.828 "name": "BaseBdev3", 00:14:12.828 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:12.828 "is_configured": true, 00:14:12.828 "data_offset": 2048, 00:14:12.828 "data_size": 63488 00:14:12.828 }, 00:14:12.828 { 00:14:12.828 "name": "BaseBdev4", 00:14:12.828 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:12.828 "is_configured": true, 00:14:12.828 "data_offset": 2048, 00:14:12.828 "data_size": 63488 00:14:12.828 } 00:14:12.828 ] 00:14:12.828 }' 00:14:12.828 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.089 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.089 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.089 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.089 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.089 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.089 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.089 [2024-12-06 23:48:24.458957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.089 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.089 23:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:13.089 [2024-12-06 23:48:24.510648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:13.089 
[2024-12-06 23:48:24.512553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.089 [2024-12-06 23:48:24.621490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:13.089 [2024-12-06 23:48:24.622098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:13.349 [2024-12-06 23:48:24.738668] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:13.349 [2024-12-06 23:48:24.739028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:13.610 [2024-12-06 23:48:24.985722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:13.610 [2024-12-06 23:48:24.986265] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:13.610 155.33 IOPS, 466.00 MiB/s [2024-12-06T23:48:25.173Z] [2024-12-06 23:48:25.109778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:13.610 [2024-12-06 23:48:25.110483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.179 [2024-12-06 23:48:25.538455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:14.179 [2024-12-06 23:48:25.538742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.179 "name": "raid_bdev1", 00:14:14.179 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:14.179 "strip_size_kb": 0, 00:14:14.179 "state": "online", 00:14:14.179 "raid_level": "raid1", 00:14:14.179 "superblock": true, 00:14:14.179 "num_base_bdevs": 4, 00:14:14.179 "num_base_bdevs_discovered": 4, 00:14:14.179 "num_base_bdevs_operational": 4, 00:14:14.179 "process": { 00:14:14.179 "type": "rebuild", 00:14:14.179 "target": "spare", 00:14:14.179 "progress": { 00:14:14.179 "blocks": 14336, 00:14:14.179 "percent": 22 00:14:14.179 } 00:14:14.179 }, 00:14:14.179 "base_bdevs_list": [ 00:14:14.179 { 00:14:14.179 "name": "spare", 00:14:14.179 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:14.179 "is_configured": true, 00:14:14.179 "data_offset": 2048, 00:14:14.179 "data_size": 63488 00:14:14.179 }, 00:14:14.179 { 00:14:14.179 "name": "BaseBdev2", 00:14:14.179 "uuid": "a56a8a26-93dd-5283-9c97-fc58edf0666c", 00:14:14.179 
"is_configured": true, 00:14:14.179 "data_offset": 2048, 00:14:14.179 "data_size": 63488 00:14:14.179 }, 00:14:14.179 { 00:14:14.179 "name": "BaseBdev3", 00:14:14.179 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:14.179 "is_configured": true, 00:14:14.179 "data_offset": 2048, 00:14:14.179 "data_size": 63488 00:14:14.179 }, 00:14:14.179 { 00:14:14.179 "name": "BaseBdev4", 00:14:14.179 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:14.179 "is_configured": true, 00:14:14.179 "data_offset": 2048, 00:14:14.179 "data_size": 63488 00:14:14.179 } 00:14:14.179 ] 00:14:14.179 }' 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:14.179 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.179 23:48:25 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:14.179 [2024-12-06 23:48:25.632892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:14.438 [2024-12-06 23:48:25.871215] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:14.438 [2024-12-06 23:48:25.871310] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:14.438 "name": "raid_bdev1", 00:14:14.438 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:14.438 "strip_size_kb": 0, 00:14:14.438 "state": "online", 00:14:14.438 "raid_level": "raid1", 00:14:14.438 "superblock": true, 00:14:14.438 "num_base_bdevs": 4, 00:14:14.438 "num_base_bdevs_discovered": 3, 00:14:14.438 "num_base_bdevs_operational": 3, 00:14:14.438 "process": { 00:14:14.438 "type": "rebuild", 00:14:14.438 "target": "spare", 00:14:14.438 "progress": { 00:14:14.438 "blocks": 18432, 00:14:14.438 "percent": 29 00:14:14.438 } 00:14:14.438 }, 00:14:14.438 "base_bdevs_list": [ 00:14:14.438 { 00:14:14.438 "name": "spare", 00:14:14.438 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:14.438 "is_configured": true, 00:14:14.438 "data_offset": 2048, 00:14:14.438 "data_size": 63488 00:14:14.438 }, 00:14:14.438 { 00:14:14.438 "name": null, 00:14:14.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.438 "is_configured": false, 00:14:14.438 "data_offset": 0, 00:14:14.438 "data_size": 63488 00:14:14.438 }, 00:14:14.438 { 00:14:14.438 "name": "BaseBdev3", 00:14:14.438 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:14.438 "is_configured": true, 00:14:14.438 "data_offset": 2048, 00:14:14.438 "data_size": 63488 00:14:14.438 }, 00:14:14.438 { 00:14:14.438 "name": "BaseBdev4", 00:14:14.438 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:14.438 "is_configured": true, 00:14:14.438 "data_offset": 2048, 00:14:14.438 "data_size": 63488 00:14:14.438 } 00:14:14.438 ] 00:14:14.438 }' 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.438 23:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=495 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.697 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.697 [2024-12-06 23:48:26.100506] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:14.697 [2024-12-06 23:48:26.101102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:14.698 135.25 IOPS, 405.75 MiB/s [2024-12-06T23:48:26.261Z] 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.698 "name": "raid_bdev1", 00:14:14.698 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:14.698 "strip_size_kb": 0, 00:14:14.698 "state": 
"online", 00:14:14.698 "raid_level": "raid1", 00:14:14.698 "superblock": true, 00:14:14.698 "num_base_bdevs": 4, 00:14:14.698 "num_base_bdevs_discovered": 3, 00:14:14.698 "num_base_bdevs_operational": 3, 00:14:14.698 "process": { 00:14:14.698 "type": "rebuild", 00:14:14.698 "target": "spare", 00:14:14.698 "progress": { 00:14:14.698 "blocks": 20480, 00:14:14.698 "percent": 32 00:14:14.698 } 00:14:14.698 }, 00:14:14.698 "base_bdevs_list": [ 00:14:14.698 { 00:14:14.698 "name": "spare", 00:14:14.698 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:14.698 "is_configured": true, 00:14:14.698 "data_offset": 2048, 00:14:14.698 "data_size": 63488 00:14:14.698 }, 00:14:14.698 { 00:14:14.698 "name": null, 00:14:14.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.698 "is_configured": false, 00:14:14.698 "data_offset": 0, 00:14:14.698 "data_size": 63488 00:14:14.698 }, 00:14:14.698 { 00:14:14.698 "name": "BaseBdev3", 00:14:14.698 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:14.698 "is_configured": true, 00:14:14.698 "data_offset": 2048, 00:14:14.698 "data_size": 63488 00:14:14.698 }, 00:14:14.698 { 00:14:14.698 "name": "BaseBdev4", 00:14:14.698 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:14.698 "is_configured": true, 00:14:14.698 "data_offset": 2048, 00:14:14.698 "data_size": 63488 00:14:14.698 } 00:14:14.698 ] 00:14:14.698 }' 00:14:14.698 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.698 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.698 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.698 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.698 23:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.957 [2024-12-06 23:48:26.426616] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:15.217 [2024-12-06 23:48:26.539375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:15.217 [2024-12-06 23:48:26.539735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:15.477 [2024-12-06 23:48:26.855645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:15.737 122.80 IOPS, 368.40 MiB/s [2024-12-06T23:48:27.300Z] 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:15.737 "name": "raid_bdev1", 00:14:15.737 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:15.737 "strip_size_kb": 0, 00:14:15.737 "state": "online", 00:14:15.737 "raid_level": "raid1", 00:14:15.737 "superblock": true, 00:14:15.737 "num_base_bdevs": 4, 00:14:15.737 "num_base_bdevs_discovered": 3, 00:14:15.737 "num_base_bdevs_operational": 3, 00:14:15.737 "process": { 00:14:15.737 "type": "rebuild", 00:14:15.737 "target": "spare", 00:14:15.737 "progress": { 00:14:15.737 "blocks": 38912, 00:14:15.737 "percent": 61 00:14:15.737 } 00:14:15.737 }, 00:14:15.737 "base_bdevs_list": [ 00:14:15.737 { 00:14:15.737 "name": "spare", 00:14:15.737 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:15.737 "is_configured": true, 00:14:15.737 "data_offset": 2048, 00:14:15.737 "data_size": 63488 00:14:15.737 }, 00:14:15.737 { 00:14:15.737 "name": null, 00:14:15.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.737 "is_configured": false, 00:14:15.737 "data_offset": 0, 00:14:15.737 "data_size": 63488 00:14:15.737 }, 00:14:15.737 { 00:14:15.737 "name": "BaseBdev3", 00:14:15.737 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:15.737 "is_configured": true, 00:14:15.737 "data_offset": 2048, 00:14:15.737 "data_size": 63488 00:14:15.737 }, 00:14:15.737 { 00:14:15.737 "name": "BaseBdev4", 00:14:15.737 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:15.737 "is_configured": true, 00:14:15.737 "data_offset": 2048, 00:14:15.737 "data_size": 63488 00:14:15.737 } 00:14:15.737 ] 00:14:15.737 }' 00:14:15.737 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.997 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.997 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.997 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:14:15.997 23:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.258 [2024-12-06 23:48:27.584491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:16.828 108.00 IOPS, 324.00 MiB/s [2024-12-06T23:48:28.391Z] [2024-12-06 23:48:28.131474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:16.828 [2024-12-06 23:48:28.346551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:16.828 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.828 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.829 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.089 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:17.089 "name": "raid_bdev1", 00:14:17.089 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:17.089 "strip_size_kb": 0, 00:14:17.089 "state": "online", 00:14:17.089 "raid_level": "raid1", 00:14:17.089 "superblock": true, 00:14:17.089 "num_base_bdevs": 4, 00:14:17.089 "num_base_bdevs_discovered": 3, 00:14:17.089 "num_base_bdevs_operational": 3, 00:14:17.089 "process": { 00:14:17.089 "type": "rebuild", 00:14:17.089 "target": "spare", 00:14:17.089 "progress": { 00:14:17.089 "blocks": 59392, 00:14:17.089 "percent": 93 00:14:17.089 } 00:14:17.089 }, 00:14:17.089 "base_bdevs_list": [ 00:14:17.089 { 00:14:17.089 "name": "spare", 00:14:17.089 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:17.089 "is_configured": true, 00:14:17.089 "data_offset": 2048, 00:14:17.089 "data_size": 63488 00:14:17.089 }, 00:14:17.089 { 00:14:17.089 "name": null, 00:14:17.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.089 "is_configured": false, 00:14:17.089 "data_offset": 0, 00:14:17.089 "data_size": 63488 00:14:17.089 }, 00:14:17.089 { 00:14:17.089 "name": "BaseBdev3", 00:14:17.089 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:17.089 "is_configured": true, 00:14:17.089 "data_offset": 2048, 00:14:17.089 "data_size": 63488 00:14:17.089 }, 00:14:17.089 { 00:14:17.089 "name": "BaseBdev4", 00:14:17.089 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:17.089 "is_configured": true, 00:14:17.089 "data_offset": 2048, 00:14:17.089 "data_size": 63488 00:14:17.089 } 00:14:17.089 ] 00:14:17.089 }' 00:14:17.089 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.089 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.089 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.089 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:14:17.089 23:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.089 [2024-12-06 23:48:28.569337] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:17.349 [2024-12-06 23:48:28.669159] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:17.349 [2024-12-06 23:48:28.671738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.180 97.71 IOPS, 293.14 MiB/s [2024-12-06T23:48:29.743Z] 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.180 "name": "raid_bdev1", 00:14:18.180 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 
00:14:18.180 "strip_size_kb": 0, 00:14:18.180 "state": "online", 00:14:18.180 "raid_level": "raid1", 00:14:18.180 "superblock": true, 00:14:18.180 "num_base_bdevs": 4, 00:14:18.180 "num_base_bdevs_discovered": 3, 00:14:18.180 "num_base_bdevs_operational": 3, 00:14:18.180 "base_bdevs_list": [ 00:14:18.180 { 00:14:18.180 "name": "spare", 00:14:18.180 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:18.180 "is_configured": true, 00:14:18.180 "data_offset": 2048, 00:14:18.180 "data_size": 63488 00:14:18.180 }, 00:14:18.180 { 00:14:18.180 "name": null, 00:14:18.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.180 "is_configured": false, 00:14:18.180 "data_offset": 0, 00:14:18.180 "data_size": 63488 00:14:18.180 }, 00:14:18.180 { 00:14:18.180 "name": "BaseBdev3", 00:14:18.180 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:18.180 "is_configured": true, 00:14:18.180 "data_offset": 2048, 00:14:18.180 "data_size": 63488 00:14:18.180 }, 00:14:18.180 { 00:14:18.180 "name": "BaseBdev4", 00:14:18.180 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:18.180 "is_configured": true, 00:14:18.180 "data_offset": 2048, 00:14:18.180 "data_size": 63488 00:14:18.180 } 00:14:18.180 ] 00:14:18.180 }' 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.180 "name": "raid_bdev1", 00:14:18.180 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:18.180 "strip_size_kb": 0, 00:14:18.180 "state": "online", 00:14:18.180 "raid_level": "raid1", 00:14:18.180 "superblock": true, 00:14:18.180 "num_base_bdevs": 4, 00:14:18.180 "num_base_bdevs_discovered": 3, 00:14:18.180 "num_base_bdevs_operational": 3, 00:14:18.180 "base_bdevs_list": [ 00:14:18.180 { 00:14:18.180 "name": "spare", 00:14:18.180 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:18.180 "is_configured": true, 00:14:18.180 "data_offset": 2048, 00:14:18.180 "data_size": 63488 00:14:18.180 }, 00:14:18.180 { 00:14:18.180 "name": null, 00:14:18.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.180 "is_configured": false, 00:14:18.180 "data_offset": 0, 00:14:18.180 "data_size": 63488 00:14:18.180 }, 00:14:18.180 { 00:14:18.180 "name": "BaseBdev3", 00:14:18.180 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:18.180 "is_configured": true, 
00:14:18.180 "data_offset": 2048, 00:14:18.180 "data_size": 63488 00:14:18.180 }, 00:14:18.180 { 00:14:18.180 "name": "BaseBdev4", 00:14:18.180 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:18.180 "is_configured": true, 00:14:18.180 "data_offset": 2048, 00:14:18.180 "data_size": 63488 00:14:18.180 } 00:14:18.180 ] 00:14:18.180 }' 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.180 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.441 "name": "raid_bdev1", 00:14:18.441 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:18.441 "strip_size_kb": 0, 00:14:18.441 "state": "online", 00:14:18.441 "raid_level": "raid1", 00:14:18.441 "superblock": true, 00:14:18.441 "num_base_bdevs": 4, 00:14:18.441 "num_base_bdevs_discovered": 3, 00:14:18.441 "num_base_bdevs_operational": 3, 00:14:18.441 "base_bdevs_list": [ 00:14:18.441 { 00:14:18.441 "name": "spare", 00:14:18.441 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:18.441 "is_configured": true, 00:14:18.441 "data_offset": 2048, 00:14:18.441 "data_size": 63488 00:14:18.441 }, 00:14:18.441 { 00:14:18.441 "name": null, 00:14:18.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.441 "is_configured": false, 00:14:18.441 "data_offset": 0, 00:14:18.441 "data_size": 63488 00:14:18.441 }, 00:14:18.441 { 00:14:18.441 "name": "BaseBdev3", 00:14:18.441 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:18.441 "is_configured": true, 00:14:18.441 "data_offset": 2048, 00:14:18.441 "data_size": 63488 00:14:18.441 }, 00:14:18.441 { 00:14:18.441 "name": "BaseBdev4", 00:14:18.441 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:18.441 "is_configured": true, 00:14:18.441 "data_offset": 2048, 00:14:18.441 "data_size": 63488 00:14:18.441 } 00:14:18.441 ] 00:14:18.441 }' 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.441 23:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.961 89.25 IOPS, 267.75 MiB/s [2024-12-06T23:48:30.524Z] 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:18.961 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.961 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.962 [2024-12-06 23:48:30.278059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:18.962 [2024-12-06 23:48:30.278138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.962 00:14:18.962 Latency(us) 00:14:18.962 [2024-12-06T23:48:30.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.962 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:18.962 raid_bdev1 : 8.29 87.84 263.51 0.00 0.00 15936.80 334.48 118136.51 00:14:18.962 [2024-12-06T23:48:30.525Z] =================================================================================================================== 00:14:18.962 [2024-12-06T23:48:30.525Z] Total : 87.84 263.51 0.00 0.00 15936.80 334.48 118136.51 00:14:18.962 [2024-12-06 23:48:30.398011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.962 [2024-12-06 23:48:30.398106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.962 [2024-12-06 23:48:30.398236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.962 [2024-12-06 23:48:30.398289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:18.962 { 00:14:18.962 "results": [ 00:14:18.962 { 00:14:18.962 "job": "raid_bdev1", 
00:14:18.962 "core_mask": "0x1", 00:14:18.962 "workload": "randrw", 00:14:18.962 "percentage": 50, 00:14:18.962 "status": "finished", 00:14:18.962 "queue_depth": 2, 00:14:18.962 "io_size": 3145728, 00:14:18.962 "runtime": 8.288266, 00:14:18.962 "iops": 87.83501880851797, 00:14:18.962 "mibps": 263.5050564255539, 00:14:18.962 "io_failed": 0, 00:14:18.962 "io_timeout": 0, 00:14:18.962 "avg_latency_us": 15936.80034550602, 00:14:18.962 "min_latency_us": 334.4768558951965, 00:14:18.962 "max_latency_us": 118136.51004366812 00:14:18.962 } 00:14:18.962 ], 00:14:18.962 "core_count": 1 00:14:18.962 } 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
local bdev_list 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.962 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:19.222 /dev/nbd0 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:14:19.222 1+0 records in 00:14:19.222 1+0 records out 00:14:19.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558663 s, 7.3 MB/s 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:19.222 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:19.223 23:48:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.223 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:19.483 /dev/nbd1 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.483 1+0 records in 00:14:19.483 1+0 
records out 00:14:19.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615757 s, 6.7 MB/s 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.483 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.484 23:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:19.743 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:19.743 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.744 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:19.744 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.744 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:19.744 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.744 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:20.004 23:48:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.004 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:20.004 /dev/nbd1 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.264 1+0 records in 00:14:20.264 1+0 records out 00:14:20.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033178 s, 12.3 MB/s 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.264 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:20.525 23:48:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.525 23:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.785 [2024-12-06 23:48:32.143326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.785 [2024-12-06 23:48:32.143425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.785 [2024-12-06 23:48:32.143490] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:20.785 [2024-12-06 23:48:32.143521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.785 [2024-12-06 23:48:32.145584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.785 [2024-12-06 23:48:32.145657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.785 [2024-12-06 23:48:32.145769] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:20.785 [2024-12-06 23:48:32.145857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.785 [2024-12-06 23:48:32.145997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.785 [2024-12-06 23:48:32.146094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:20.785 spare 00:14:20.785 23:48:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.785 [2024-12-06 23:48:32.245990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:20.785 [2024-12-06 23:48:32.246055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.785 [2024-12-06 23:48:32.246352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:20.785 [2024-12-06 23:48:32.246564] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:20.785 [2024-12-06 23:48:32.246614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:20.785 [2024-12-06 23:48:32.246820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.785 "name": "raid_bdev1", 00:14:20.785 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:20.785 "strip_size_kb": 0, 00:14:20.785 "state": "online", 00:14:20.785 "raid_level": "raid1", 00:14:20.785 "superblock": true, 00:14:20.785 "num_base_bdevs": 4, 00:14:20.785 "num_base_bdevs_discovered": 3, 00:14:20.785 "num_base_bdevs_operational": 3, 00:14:20.785 "base_bdevs_list": [ 00:14:20.785 { 00:14:20.785 "name": "spare", 00:14:20.785 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:20.785 "is_configured": true, 00:14:20.785 "data_offset": 2048, 00:14:20.785 "data_size": 63488 00:14:20.785 }, 00:14:20.785 { 00:14:20.785 "name": null, 00:14:20.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.785 "is_configured": false, 00:14:20.785 "data_offset": 2048, 00:14:20.785 "data_size": 63488 00:14:20.785 }, 00:14:20.785 { 00:14:20.785 "name": "BaseBdev3", 00:14:20.785 
"uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:20.785 "is_configured": true, 00:14:20.785 "data_offset": 2048, 00:14:20.785 "data_size": 63488 00:14:20.785 }, 00:14:20.785 { 00:14:20.785 "name": "BaseBdev4", 00:14:20.785 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:20.785 "is_configured": true, 00:14:20.785 "data_offset": 2048, 00:14:20.785 "data_size": 63488 00:14:20.785 } 00:14:20.785 ] 00:14:20.785 }' 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.785 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.356 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.356 "name": "raid_bdev1", 00:14:21.356 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:21.356 "strip_size_kb": 0, 
00:14:21.356 "state": "online", 00:14:21.356 "raid_level": "raid1", 00:14:21.356 "superblock": true, 00:14:21.356 "num_base_bdevs": 4, 00:14:21.356 "num_base_bdevs_discovered": 3, 00:14:21.356 "num_base_bdevs_operational": 3, 00:14:21.356 "base_bdevs_list": [ 00:14:21.356 { 00:14:21.356 "name": "spare", 00:14:21.356 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:21.356 "is_configured": true, 00:14:21.356 "data_offset": 2048, 00:14:21.356 "data_size": 63488 00:14:21.356 }, 00:14:21.356 { 00:14:21.356 "name": null, 00:14:21.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.356 "is_configured": false, 00:14:21.356 "data_offset": 2048, 00:14:21.356 "data_size": 63488 00:14:21.356 }, 00:14:21.356 { 00:14:21.357 "name": "BaseBdev3", 00:14:21.357 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:21.357 "is_configured": true, 00:14:21.357 "data_offset": 2048, 00:14:21.357 "data_size": 63488 00:14:21.357 }, 00:14:21.357 { 00:14:21.357 "name": "BaseBdev4", 00:14:21.357 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:21.357 "is_configured": true, 00:14:21.357 "data_offset": 2048, 00:14:21.357 "data_size": 63488 00:14:21.357 } 00:14:21.357 ] 00:14:21.357 }' 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.357 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.357 [2024-12-06 23:48:32.914121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.617 "name": "raid_bdev1", 00:14:21.617 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:21.617 "strip_size_kb": 0, 00:14:21.617 "state": "online", 00:14:21.617 "raid_level": "raid1", 00:14:21.617 "superblock": true, 00:14:21.617 "num_base_bdevs": 4, 00:14:21.617 "num_base_bdevs_discovered": 2, 00:14:21.617 "num_base_bdevs_operational": 2, 00:14:21.617 "base_bdevs_list": [ 00:14:21.617 { 00:14:21.617 "name": null, 00:14:21.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.617 "is_configured": false, 00:14:21.617 "data_offset": 0, 00:14:21.617 "data_size": 63488 00:14:21.617 }, 00:14:21.617 { 00:14:21.617 "name": null, 00:14:21.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.617 "is_configured": false, 00:14:21.617 "data_offset": 2048, 00:14:21.617 "data_size": 63488 00:14:21.617 }, 00:14:21.617 { 00:14:21.617 "name": "BaseBdev3", 00:14:21.617 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:21.617 "is_configured": true, 00:14:21.617 "data_offset": 2048, 00:14:21.617 "data_size": 63488 00:14:21.617 }, 00:14:21.617 { 00:14:21.617 "name": "BaseBdev4", 00:14:21.617 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:21.617 "is_configured": true, 00:14:21.617 "data_offset": 2048, 00:14:21.617 "data_size": 63488 00:14:21.617 } 00:14:21.617 ] 00:14:21.617 }' 00:14:21.617 
23:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.617 23:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 23:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.878 23:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.878 23:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 [2024-12-06 23:48:33.381427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.878 [2024-12-06 23:48:33.381573] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:21.878 [2024-12-06 23:48:33.381586] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:21.878 [2024-12-06 23:48:33.381625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.878 [2024-12-06 23:48:33.395863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:21.878 23:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.878 23:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:21.878 [2024-12-06 23:48:33.397727] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.261 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.261 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.261 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.261 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.261 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.261 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.262 "name": "raid_bdev1", 00:14:23.262 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:23.262 "strip_size_kb": 0, 00:14:23.262 "state": "online", 00:14:23.262 "raid_level": "raid1", 00:14:23.262 "superblock": true, 00:14:23.262 "num_base_bdevs": 4, 00:14:23.262 "num_base_bdevs_discovered": 3, 00:14:23.262 "num_base_bdevs_operational": 3, 00:14:23.262 "process": { 00:14:23.262 "type": "rebuild", 00:14:23.262 "target": "spare", 00:14:23.262 "progress": { 00:14:23.262 "blocks": 20480, 00:14:23.262 "percent": 32 00:14:23.262 } 00:14:23.262 }, 00:14:23.262 "base_bdevs_list": [ 00:14:23.262 { 00:14:23.262 "name": "spare", 00:14:23.262 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:23.262 "is_configured": true, 00:14:23.262 "data_offset": 2048, 00:14:23.262 "data_size": 63488 00:14:23.262 }, 00:14:23.262 { 00:14:23.262 "name": null, 00:14:23.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.262 "is_configured": false, 00:14:23.262 "data_offset": 2048, 00:14:23.262 "data_size": 63488 00:14:23.262 }, 00:14:23.262 { 00:14:23.262 "name": "BaseBdev3", 00:14:23.262 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:23.262 
"is_configured": true, 00:14:23.262 "data_offset": 2048, 00:14:23.262 "data_size": 63488 00:14:23.262 }, 00:14:23.262 { 00:14:23.262 "name": "BaseBdev4", 00:14:23.262 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:23.262 "is_configured": true, 00:14:23.262 "data_offset": 2048, 00:14:23.262 "data_size": 63488 00:14:23.262 } 00:14:23.262 ] 00:14:23.262 }' 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.262 [2024-12-06 23:48:34.562019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.262 [2024-12-06 23:48:34.602383] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:23.262 [2024-12-06 23:48:34.602488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.262 [2024-12-06 23:48:34.602542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.262 [2024-12-06 23:48:34.602563] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.262 "name": "raid_bdev1", 00:14:23.262 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:23.262 "strip_size_kb": 0, 00:14:23.262 "state": "online", 00:14:23.262 "raid_level": "raid1", 00:14:23.262 "superblock": true, 00:14:23.262 "num_base_bdevs": 4, 00:14:23.262 
"num_base_bdevs_discovered": 2, 00:14:23.262 "num_base_bdevs_operational": 2, 00:14:23.262 "base_bdevs_list": [ 00:14:23.262 { 00:14:23.262 "name": null, 00:14:23.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.262 "is_configured": false, 00:14:23.262 "data_offset": 0, 00:14:23.262 "data_size": 63488 00:14:23.262 }, 00:14:23.262 { 00:14:23.262 "name": null, 00:14:23.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.262 "is_configured": false, 00:14:23.262 "data_offset": 2048, 00:14:23.262 "data_size": 63488 00:14:23.262 }, 00:14:23.262 { 00:14:23.262 "name": "BaseBdev3", 00:14:23.262 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:23.262 "is_configured": true, 00:14:23.262 "data_offset": 2048, 00:14:23.262 "data_size": 63488 00:14:23.262 }, 00:14:23.262 { 00:14:23.262 "name": "BaseBdev4", 00:14:23.262 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:23.262 "is_configured": true, 00:14:23.262 "data_offset": 2048, 00:14:23.262 "data_size": 63488 00:14:23.262 } 00:14:23.262 ] 00:14:23.262 }' 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.262 23:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.831 23:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:23.831 23:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.831 23:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.831 [2024-12-06 23:48:35.093405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:23.831 [2024-12-06 23:48:35.093461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.831 [2024-12-06 23:48:35.093490] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:23.831 [2024-12-06 
23:48:35.093499] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.831 [2024-12-06 23:48:35.093961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.831 [2024-12-06 23:48:35.093979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:23.831 [2024-12-06 23:48:35.094058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:23.831 [2024-12-06 23:48:35.094069] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:23.831 [2024-12-06 23:48:35.094079] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:23.831 [2024-12-06 23:48:35.094098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.831 [2024-12-06 23:48:35.108372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:23.831 spare 00:14:23.831 23:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.831 23:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:23.831 [2024-12-06 23:48:35.110229] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.771 23:48:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.771 "name": "raid_bdev1", 00:14:24.771 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:24.771 "strip_size_kb": 0, 00:14:24.771 "state": "online", 00:14:24.771 "raid_level": "raid1", 00:14:24.771 "superblock": true, 00:14:24.771 "num_base_bdevs": 4, 00:14:24.771 "num_base_bdevs_discovered": 3, 00:14:24.771 "num_base_bdevs_operational": 3, 00:14:24.771 "process": { 00:14:24.771 "type": "rebuild", 00:14:24.771 "target": "spare", 00:14:24.771 "progress": { 00:14:24.771 "blocks": 20480, 00:14:24.771 "percent": 32 00:14:24.771 } 00:14:24.771 }, 00:14:24.771 "base_bdevs_list": [ 00:14:24.771 { 00:14:24.771 "name": "spare", 00:14:24.771 "uuid": "208ce432-b77a-5a0a-918c-e2a99094d8a3", 00:14:24.771 "is_configured": true, 00:14:24.771 "data_offset": 2048, 00:14:24.771 "data_size": 63488 00:14:24.771 }, 00:14:24.771 { 00:14:24.771 "name": null, 00:14:24.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.771 "is_configured": false, 00:14:24.771 "data_offset": 2048, 00:14:24.771 "data_size": 63488 00:14:24.771 }, 00:14:24.771 { 00:14:24.771 "name": "BaseBdev3", 00:14:24.771 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:24.771 "is_configured": true, 00:14:24.771 "data_offset": 2048, 00:14:24.771 "data_size": 63488 00:14:24.771 }, 00:14:24.771 { 00:14:24.771 "name": "BaseBdev4", 00:14:24.771 "uuid": 
"7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:24.771 "is_configured": true, 00:14:24.771 "data_offset": 2048, 00:14:24.771 "data_size": 63488 00:14:24.771 } 00:14:24.771 ] 00:14:24.771 }' 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.771 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.771 [2024-12-06 23:48:36.273999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.771 [2024-12-06 23:48:36.314832] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.771 [2024-12-06 23:48:36.314893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.771 [2024-12-06 23:48:36.314909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.771 [2024-12-06 23:48:36.314918] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.031 23:48:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.031 "name": "raid_bdev1", 00:14:25.031 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:25.031 "strip_size_kb": 0, 00:14:25.031 "state": "online", 00:14:25.031 "raid_level": "raid1", 00:14:25.031 "superblock": true, 00:14:25.031 "num_base_bdevs": 4, 00:14:25.031 "num_base_bdevs_discovered": 2, 00:14:25.031 "num_base_bdevs_operational": 2, 00:14:25.031 "base_bdevs_list": [ 00:14:25.031 { 00:14:25.031 "name": null, 00:14:25.031 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:25.031 "is_configured": false, 00:14:25.031 "data_offset": 0, 00:14:25.031 "data_size": 63488 00:14:25.031 }, 00:14:25.031 { 00:14:25.031 "name": null, 00:14:25.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.031 "is_configured": false, 00:14:25.031 "data_offset": 2048, 00:14:25.031 "data_size": 63488 00:14:25.031 }, 00:14:25.031 { 00:14:25.031 "name": "BaseBdev3", 00:14:25.031 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:25.031 "is_configured": true, 00:14:25.031 "data_offset": 2048, 00:14:25.031 "data_size": 63488 00:14:25.031 }, 00:14:25.031 { 00:14:25.031 "name": "BaseBdev4", 00:14:25.031 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:25.031 "is_configured": true, 00:14:25.031 "data_offset": 2048, 00:14:25.031 "data_size": 63488 00:14:25.031 } 00:14:25.031 ] 00:14:25.031 }' 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.031 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.290 "name": "raid_bdev1", 00:14:25.290 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:25.290 "strip_size_kb": 0, 00:14:25.290 "state": "online", 00:14:25.290 "raid_level": "raid1", 00:14:25.290 "superblock": true, 00:14:25.290 "num_base_bdevs": 4, 00:14:25.290 "num_base_bdevs_discovered": 2, 00:14:25.290 "num_base_bdevs_operational": 2, 00:14:25.290 "base_bdevs_list": [ 00:14:25.290 { 00:14:25.290 "name": null, 00:14:25.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.290 "is_configured": false, 00:14:25.290 "data_offset": 0, 00:14:25.290 "data_size": 63488 00:14:25.290 }, 00:14:25.290 { 00:14:25.290 "name": null, 00:14:25.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.290 "is_configured": false, 00:14:25.290 "data_offset": 2048, 00:14:25.290 "data_size": 63488 00:14:25.290 }, 00:14:25.290 { 00:14:25.290 "name": "BaseBdev3", 00:14:25.290 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:25.290 "is_configured": true, 00:14:25.290 "data_offset": 2048, 00:14:25.290 "data_size": 63488 00:14:25.290 }, 00:14:25.290 { 00:14:25.290 "name": "BaseBdev4", 00:14:25.290 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:25.290 "is_configured": true, 00:14:25.290 "data_offset": 2048, 00:14:25.290 "data_size": 63488 00:14:25.290 } 00:14:25.290 ] 00:14:25.290 }' 00:14:25.290 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.550 23:48:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.550 [2024-12-06 23:48:36.937418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:25.550 [2024-12-06 23:48:36.937468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.550 [2024-12-06 23:48:36.937486] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:25.550 [2024-12-06 23:48:36.937496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.550 [2024-12-06 23:48:36.937938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.550 [2024-12-06 23:48:36.937959] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:25.550 [2024-12-06 23:48:36.938028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:25.550 [2024-12-06 23:48:36.938044] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:25.550 [2024-12-06 23:48:36.938051] 
bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:25.550 [2024-12-06 23:48:36.938065] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:25.550 BaseBdev1 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.550 23:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.490 23:48:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 23:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.490 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.490 "name": "raid_bdev1", 00:14:26.490 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:26.490 "strip_size_kb": 0, 00:14:26.490 "state": "online", 00:14:26.490 "raid_level": "raid1", 00:14:26.490 "superblock": true, 00:14:26.490 "num_base_bdevs": 4, 00:14:26.490 "num_base_bdevs_discovered": 2, 00:14:26.490 "num_base_bdevs_operational": 2, 00:14:26.490 "base_bdevs_list": [ 00:14:26.490 { 00:14:26.490 "name": null, 00:14:26.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.490 "is_configured": false, 00:14:26.490 "data_offset": 0, 00:14:26.490 "data_size": 63488 00:14:26.490 }, 00:14:26.490 { 00:14:26.490 "name": null, 00:14:26.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.490 "is_configured": false, 00:14:26.490 "data_offset": 2048, 00:14:26.490 "data_size": 63488 00:14:26.490 }, 00:14:26.490 { 00:14:26.490 "name": "BaseBdev3", 00:14:26.490 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:26.490 "is_configured": true, 00:14:26.490 "data_offset": 2048, 00:14:26.490 "data_size": 63488 00:14:26.490 }, 00:14:26.490 { 00:14:26.490 "name": "BaseBdev4", 00:14:26.490 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:26.490 "is_configured": true, 00:14:26.490 "data_offset": 2048, 00:14:26.490 "data_size": 63488 00:14:26.490 } 00:14:26.490 ] 00:14:26.490 }' 00:14:26.490 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.490 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.061 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.061 "name": "raid_bdev1", 00:14:27.061 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:27.061 "strip_size_kb": 0, 00:14:27.061 "state": "online", 00:14:27.061 "raid_level": "raid1", 00:14:27.061 "superblock": true, 00:14:27.061 "num_base_bdevs": 4, 00:14:27.061 "num_base_bdevs_discovered": 2, 00:14:27.061 "num_base_bdevs_operational": 2, 00:14:27.061 "base_bdevs_list": [ 00:14:27.061 { 00:14:27.061 "name": null, 00:14:27.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.061 "is_configured": false, 00:14:27.061 "data_offset": 0, 00:14:27.061 "data_size": 63488 00:14:27.061 }, 00:14:27.061 { 00:14:27.061 "name": null, 00:14:27.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.061 "is_configured": false, 00:14:27.061 "data_offset": 2048, 00:14:27.061 "data_size": 63488 00:14:27.061 }, 00:14:27.061 { 00:14:27.061 "name": "BaseBdev3", 00:14:27.061 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 
00:14:27.061 "is_configured": true, 00:14:27.061 "data_offset": 2048, 00:14:27.061 "data_size": 63488 00:14:27.061 }, 00:14:27.062 { 00:14:27.062 "name": "BaseBdev4", 00:14:27.062 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:27.062 "is_configured": true, 00:14:27.062 "data_offset": 2048, 00:14:27.062 "data_size": 63488 00:14:27.062 } 00:14:27.062 ] 00:14:27.062 }' 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.062 23:48:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.062 [2024-12-06 23:48:38.499153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.062 [2024-12-06 23:48:38.499300] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:27.062 [2024-12-06 23:48:38.499312] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:27.062 request: 00:14:27.062 { 00:14:27.062 "base_bdev": "BaseBdev1", 00:14:27.062 "raid_bdev": "raid_bdev1", 00:14:27.062 "method": "bdev_raid_add_base_bdev", 00:14:27.062 "req_id": 1 00:14:27.062 } 00:14:27.062 Got JSON-RPC error response 00:14:27.062 response: 00:14:27.062 { 00:14:27.062 "code": -22, 00:14:27.062 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:27.062 } 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.062 23:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.000 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.260 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.260 "name": "raid_bdev1", 00:14:28.260 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:28.260 "strip_size_kb": 0, 00:14:28.260 "state": "online", 00:14:28.260 "raid_level": "raid1", 00:14:28.260 "superblock": true, 00:14:28.260 "num_base_bdevs": 4, 00:14:28.260 "num_base_bdevs_discovered": 2, 00:14:28.260 "num_base_bdevs_operational": 2, 00:14:28.260 "base_bdevs_list": [ 00:14:28.260 { 00:14:28.260 "name": null, 00:14:28.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.260 "is_configured": false, 00:14:28.260 "data_offset": 0, 00:14:28.260 "data_size": 63488 00:14:28.260 }, 00:14:28.260 { 
00:14:28.260 "name": null, 00:14:28.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.260 "is_configured": false, 00:14:28.260 "data_offset": 2048, 00:14:28.260 "data_size": 63488 00:14:28.260 }, 00:14:28.260 { 00:14:28.260 "name": "BaseBdev3", 00:14:28.260 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:28.260 "is_configured": true, 00:14:28.260 "data_offset": 2048, 00:14:28.260 "data_size": 63488 00:14:28.260 }, 00:14:28.260 { 00:14:28.260 "name": "BaseBdev4", 00:14:28.260 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:28.260 "is_configured": true, 00:14:28.260 "data_offset": 2048, 00:14:28.260 "data_size": 63488 00:14:28.260 } 00:14:28.260 ] 00:14:28.260 }' 00:14:28.260 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.260 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.520 23:48:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.520 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.520 "name": "raid_bdev1", 00:14:28.520 "uuid": "e8c3d202-a077-4648-bc62-cb0c612506da", 00:14:28.520 "strip_size_kb": 0, 00:14:28.520 "state": "online", 00:14:28.520 "raid_level": "raid1", 00:14:28.520 "superblock": true, 00:14:28.520 "num_base_bdevs": 4, 00:14:28.520 "num_base_bdevs_discovered": 2, 00:14:28.520 "num_base_bdevs_operational": 2, 00:14:28.520 "base_bdevs_list": [ 00:14:28.520 { 00:14:28.520 "name": null, 00:14:28.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.520 "is_configured": false, 00:14:28.520 "data_offset": 0, 00:14:28.520 "data_size": 63488 00:14:28.520 }, 00:14:28.520 { 00:14:28.520 "name": null, 00:14:28.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.520 "is_configured": false, 00:14:28.520 "data_offset": 2048, 00:14:28.520 "data_size": 63488 00:14:28.520 }, 00:14:28.520 { 00:14:28.520 "name": "BaseBdev3", 00:14:28.520 "uuid": "b0c0ed55-d8bb-52a3-8205-8a85d4688ac0", 00:14:28.520 "is_configured": true, 00:14:28.520 "data_offset": 2048, 00:14:28.520 "data_size": 63488 00:14:28.520 }, 00:14:28.520 { 00:14:28.520 "name": "BaseBdev4", 00:14:28.520 "uuid": "7ef038d7-78de-5395-ad06-fc1188db2744", 00:14:28.520 "is_configured": true, 00:14:28.520 "data_offset": 2048, 00:14:28.520 "data_size": 63488 00:14:28.520 } 00:14:28.520 ] 00:14:28.520 }' 00:14:28.520 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.520 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.520 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@784 -- # killprocess 79078 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79078 ']' 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79078 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79078 00:14:28.780 killing process with pid 79078 00:14:28.780 Received shutdown signal, test time was about 18.082109 seconds 00:14:28.780 00:14:28.780 Latency(us) 00:14:28.780 [2024-12-06T23:48:40.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.780 [2024-12-06T23:48:40.343Z] =================================================================================================================== 00:14:28.780 [2024-12-06T23:48:40.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79078' 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79078 00:14:28.780 [2024-12-06 23:48:40.152819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.780 [2024-12-06 23:48:40.152923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.780 23:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79078 00:14:28.780 [2024-12-06 23:48:40.152997] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.780 [2024-12-06 23:48:40.153006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:29.039 [2024-12-06 23:48:40.540351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.431 23:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:30.431 00:14:30.431 real 0m21.434s 00:14:30.431 user 0m28.034s 00:14:30.431 sys 0m2.693s 00:14:30.431 ************************************ 00:14:30.431 END TEST raid_rebuild_test_sb_io 00:14:30.431 ************************************ 00:14:30.431 23:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.431 23:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.431 23:48:41 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:30.431 23:48:41 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:30.431 23:48:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:30.431 23:48:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.431 23:48:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:30.431 ************************************ 00:14:30.431 START TEST raid5f_state_function_test 00:14:30.431 ************************************ 00:14:30.431 23:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:30.431 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 
00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:30.432 
23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79804 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:30.432 Process raid pid: 79804 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79804' 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79804 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79804 ']' 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.432 23:48:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 [2024-12-06 23:48:41.836364] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:14:30.432 [2024-12-06 23:48:41.836533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.691 [2024-12-06 23:48:42.013548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.691 [2024-12-06 23:48:42.115240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.951 [2024-12-06 23:48:42.293834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.951 [2024-12-06 23:48:42.293866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.212 [2024-12-06 23:48:42.653183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.212 [2024-12-06 23:48:42.653317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.212 [2024-12-06 23:48:42.653332] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.212 [2024-12-06 23:48:42.653342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.212 [2024-12-06 23:48:42.653348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.212 [2024-12-06 23:48:42.653356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.212 "name": "Existed_Raid", 00:14:31.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.212 "strip_size_kb": 64, 00:14:31.212 "state": "configuring", 00:14:31.212 "raid_level": "raid5f", 00:14:31.212 "superblock": false, 00:14:31.212 "num_base_bdevs": 3, 00:14:31.212 "num_base_bdevs_discovered": 0, 00:14:31.212 "num_base_bdevs_operational": 3, 00:14:31.212 "base_bdevs_list": [ 00:14:31.212 { 00:14:31.212 "name": "BaseBdev1", 00:14:31.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.212 "is_configured": false, 00:14:31.212 "data_offset": 0, 00:14:31.212 "data_size": 0 00:14:31.212 }, 00:14:31.212 { 00:14:31.212 "name": "BaseBdev2", 00:14:31.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.212 "is_configured": false, 00:14:31.212 "data_offset": 0, 00:14:31.212 "data_size": 0 00:14:31.212 }, 00:14:31.212 { 00:14:31.212 "name": "BaseBdev3", 00:14:31.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.212 "is_configured": false, 00:14:31.212 "data_offset": 0, 00:14:31.212 "data_size": 0 00:14:31.212 } 00:14:31.212 ] 00:14:31.212 }' 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.212 23:48:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.794 [2024-12-06 23:48:43.152315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.794 [2024-12-06 23:48:43.152396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.794 [2024-12-06 23:48:43.164288] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.794 [2024-12-06 23:48:43.164384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.794 [2024-12-06 23:48:43.164413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.794 [2024-12-06 23:48:43.164436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.794 [2024-12-06 23:48:43.164453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.794 [2024-12-06 23:48:43.164473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.794 [2024-12-06 23:48:43.210980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.794 BaseBdev1 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.794 [ 00:14:31.794 { 00:14:31.794 "name": "BaseBdev1", 00:14:31.794 "aliases": [ 
00:14:31.794 "9d9162e8-f1eb-43cc-918d-3986ddb95827" 00:14:31.794 ], 00:14:31.794 "product_name": "Malloc disk", 00:14:31.794 "block_size": 512, 00:14:31.794 "num_blocks": 65536, 00:14:31.794 "uuid": "9d9162e8-f1eb-43cc-918d-3986ddb95827", 00:14:31.794 "assigned_rate_limits": { 00:14:31.794 "rw_ios_per_sec": 0, 00:14:31.794 "rw_mbytes_per_sec": 0, 00:14:31.794 "r_mbytes_per_sec": 0, 00:14:31.794 "w_mbytes_per_sec": 0 00:14:31.794 }, 00:14:31.794 "claimed": true, 00:14:31.794 "claim_type": "exclusive_write", 00:14:31.794 "zoned": false, 00:14:31.794 "supported_io_types": { 00:14:31.794 "read": true, 00:14:31.794 "write": true, 00:14:31.794 "unmap": true, 00:14:31.794 "flush": true, 00:14:31.794 "reset": true, 00:14:31.794 "nvme_admin": false, 00:14:31.794 "nvme_io": false, 00:14:31.794 "nvme_io_md": false, 00:14:31.794 "write_zeroes": true, 00:14:31.794 "zcopy": true, 00:14:31.794 "get_zone_info": false, 00:14:31.794 "zone_management": false, 00:14:31.794 "zone_append": false, 00:14:31.794 "compare": false, 00:14:31.794 "compare_and_write": false, 00:14:31.794 "abort": true, 00:14:31.794 "seek_hole": false, 00:14:31.794 "seek_data": false, 00:14:31.794 "copy": true, 00:14:31.794 "nvme_iov_md": false 00:14:31.794 }, 00:14:31.794 "memory_domains": [ 00:14:31.794 { 00:14:31.794 "dma_device_id": "system", 00:14:31.794 "dma_device_type": 1 00:14:31.794 }, 00:14:31.794 { 00:14:31.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.794 "dma_device_type": 2 00:14:31.794 } 00:14:31.794 ], 00:14:31.794 "driver_specific": {} 00:14:31.794 } 00:14:31.794 ] 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:31.794 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.795 "name": "Existed_Raid", 00:14:31.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.795 "strip_size_kb": 64, 00:14:31.795 "state": "configuring", 00:14:31.795 "raid_level": "raid5f", 00:14:31.795 "superblock": false, 00:14:31.795 "num_base_bdevs": 3, 00:14:31.795 "num_base_bdevs_discovered": 1, 00:14:31.795 
"num_base_bdevs_operational": 3, 00:14:31.795 "base_bdevs_list": [ 00:14:31.795 { 00:14:31.795 "name": "BaseBdev1", 00:14:31.795 "uuid": "9d9162e8-f1eb-43cc-918d-3986ddb95827", 00:14:31.795 "is_configured": true, 00:14:31.795 "data_offset": 0, 00:14:31.795 "data_size": 65536 00:14:31.795 }, 00:14:31.795 { 00:14:31.795 "name": "BaseBdev2", 00:14:31.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.795 "is_configured": false, 00:14:31.795 "data_offset": 0, 00:14:31.795 "data_size": 0 00:14:31.795 }, 00:14:31.795 { 00:14:31.795 "name": "BaseBdev3", 00:14:31.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.795 "is_configured": false, 00:14:31.795 "data_offset": 0, 00:14:31.795 "data_size": 0 00:14:31.795 } 00:14:31.795 ] 00:14:31.795 }' 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.795 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.363 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.363 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.363 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.363 [2024-12-06 23:48:43.726122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.363 [2024-12-06 23:48:43.726158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:32.363 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.363 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.363 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:32.363 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.363 [2024-12-06 23:48:43.738148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.363 [2024-12-06 23:48:43.739801] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.363 [2024-12-06 23:48:43.739832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.363 [2024-12-06 23:48:43.739841] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.363 [2024-12-06 23:48:43.739849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.363 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.364 23:48:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.364 "name": "Existed_Raid", 00:14:32.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.364 "strip_size_kb": 64, 00:14:32.364 "state": "configuring", 00:14:32.364 "raid_level": "raid5f", 00:14:32.364 "superblock": false, 00:14:32.364 "num_base_bdevs": 3, 00:14:32.364 "num_base_bdevs_discovered": 1, 00:14:32.364 "num_base_bdevs_operational": 3, 00:14:32.364 "base_bdevs_list": [ 00:14:32.364 { 00:14:32.364 "name": "BaseBdev1", 00:14:32.364 "uuid": "9d9162e8-f1eb-43cc-918d-3986ddb95827", 00:14:32.364 "is_configured": true, 00:14:32.364 "data_offset": 0, 00:14:32.364 "data_size": 65536 00:14:32.364 }, 00:14:32.364 { 00:14:32.364 "name": "BaseBdev2", 00:14:32.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.364 "is_configured": false, 00:14:32.364 "data_offset": 0, 00:14:32.364 "data_size": 0 00:14:32.364 }, 00:14:32.364 { 00:14:32.364 "name": "BaseBdev3", 00:14:32.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.364 "is_configured": false, 
00:14:32.364 "data_offset": 0, 00:14:32.364 "data_size": 0 00:14:32.364 } 00:14:32.364 ] 00:14:32.364 }' 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.364 23:48:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.624 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:32.624 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.624 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.884 [2024-12-06 23:48:44.218003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.884 BaseBdev2 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.884 23:48:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.884 [ 00:14:32.884 { 00:14:32.884 "name": "BaseBdev2", 00:14:32.884 "aliases": [ 00:14:32.884 "5088b7eb-b41e-41b9-a0ee-4fa2c6d7d414" 00:14:32.884 ], 00:14:32.884 "product_name": "Malloc disk", 00:14:32.884 "block_size": 512, 00:14:32.884 "num_blocks": 65536, 00:14:32.884 "uuid": "5088b7eb-b41e-41b9-a0ee-4fa2c6d7d414", 00:14:32.884 "assigned_rate_limits": { 00:14:32.884 "rw_ios_per_sec": 0, 00:14:32.884 "rw_mbytes_per_sec": 0, 00:14:32.884 "r_mbytes_per_sec": 0, 00:14:32.884 "w_mbytes_per_sec": 0 00:14:32.884 }, 00:14:32.884 "claimed": true, 00:14:32.884 "claim_type": "exclusive_write", 00:14:32.884 "zoned": false, 00:14:32.884 "supported_io_types": { 00:14:32.884 "read": true, 00:14:32.884 "write": true, 00:14:32.884 "unmap": true, 00:14:32.884 "flush": true, 00:14:32.884 "reset": true, 00:14:32.884 "nvme_admin": false, 00:14:32.884 "nvme_io": false, 00:14:32.884 "nvme_io_md": false, 00:14:32.884 "write_zeroes": true, 00:14:32.884 "zcopy": true, 00:14:32.884 "get_zone_info": false, 00:14:32.884 "zone_management": false, 00:14:32.884 "zone_append": false, 00:14:32.884 "compare": false, 00:14:32.884 "compare_and_write": false, 00:14:32.884 "abort": true, 00:14:32.884 "seek_hole": false, 00:14:32.884 "seek_data": false, 00:14:32.884 "copy": true, 00:14:32.884 "nvme_iov_md": false 00:14:32.884 }, 00:14:32.884 "memory_domains": [ 00:14:32.884 { 00:14:32.884 "dma_device_id": "system", 00:14:32.884 "dma_device_type": 1 00:14:32.884 }, 00:14:32.884 { 00:14:32.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.884 
"dma_device_type": 2 00:14:32.884 } 00:14:32.884 ], 00:14:32.884 "driver_specific": {} 00:14:32.884 } 00:14:32.884 ] 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.884 
23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.884 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.884 "name": "Existed_Raid", 00:14:32.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.884 "strip_size_kb": 64, 00:14:32.884 "state": "configuring", 00:14:32.884 "raid_level": "raid5f", 00:14:32.884 "superblock": false, 00:14:32.884 "num_base_bdevs": 3, 00:14:32.884 "num_base_bdevs_discovered": 2, 00:14:32.884 "num_base_bdevs_operational": 3, 00:14:32.884 "base_bdevs_list": [ 00:14:32.884 { 00:14:32.884 "name": "BaseBdev1", 00:14:32.884 "uuid": "9d9162e8-f1eb-43cc-918d-3986ddb95827", 00:14:32.884 "is_configured": true, 00:14:32.884 "data_offset": 0, 00:14:32.884 "data_size": 65536 00:14:32.884 }, 00:14:32.884 { 00:14:32.884 "name": "BaseBdev2", 00:14:32.884 "uuid": "5088b7eb-b41e-41b9-a0ee-4fa2c6d7d414", 00:14:32.884 "is_configured": true, 00:14:32.884 "data_offset": 0, 00:14:32.885 "data_size": 65536 00:14:32.885 }, 00:14:32.885 { 00:14:32.885 "name": "BaseBdev3", 00:14:32.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.885 "is_configured": false, 00:14:32.885 "data_offset": 0, 00:14:32.885 "data_size": 0 00:14:32.885 } 00:14:32.885 ] 00:14:32.885 }' 00:14:32.885 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.885 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.145 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.145 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.145 23:48:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.145 [2024-12-06 23:48:44.706150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.145 [2024-12-06 23:48:44.706289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:33.145 [2024-12-06 23:48:44.706312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:33.145 [2024-12-06 23:48:44.706604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:33.406 [2024-12-06 23:48:44.711736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:33.406 [2024-12-06 23:48:44.711756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:33.406 [2024-12-06 23:48:44.712019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.406 BaseBdev3 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.406 [ 00:14:33.406 { 00:14:33.406 "name": "BaseBdev3", 00:14:33.406 "aliases": [ 00:14:33.406 "0eb1b4ab-82d5-43ca-9441-bd44f48eecdd" 00:14:33.406 ], 00:14:33.406 "product_name": "Malloc disk", 00:14:33.406 "block_size": 512, 00:14:33.406 "num_blocks": 65536, 00:14:33.406 "uuid": "0eb1b4ab-82d5-43ca-9441-bd44f48eecdd", 00:14:33.406 "assigned_rate_limits": { 00:14:33.406 "rw_ios_per_sec": 0, 00:14:33.406 "rw_mbytes_per_sec": 0, 00:14:33.406 "r_mbytes_per_sec": 0, 00:14:33.406 "w_mbytes_per_sec": 0 00:14:33.406 }, 00:14:33.406 "claimed": true, 00:14:33.406 "claim_type": "exclusive_write", 00:14:33.406 "zoned": false, 00:14:33.406 "supported_io_types": { 00:14:33.406 "read": true, 00:14:33.406 "write": true, 00:14:33.406 "unmap": true, 00:14:33.406 "flush": true, 00:14:33.406 "reset": true, 00:14:33.406 "nvme_admin": false, 00:14:33.406 "nvme_io": false, 00:14:33.406 "nvme_io_md": false, 00:14:33.406 "write_zeroes": true, 00:14:33.406 "zcopy": true, 00:14:33.406 "get_zone_info": false, 00:14:33.406 "zone_management": false, 00:14:33.406 "zone_append": false, 00:14:33.406 "compare": false, 00:14:33.406 "compare_and_write": false, 00:14:33.406 "abort": true, 00:14:33.406 "seek_hole": false, 00:14:33.406 "seek_data": false, 00:14:33.406 "copy": true, 00:14:33.406 "nvme_iov_md": false 00:14:33.406 }, 00:14:33.406 "memory_domains": [ 00:14:33.406 { 
00:14:33.406 "dma_device_id": "system", 00:14:33.406 "dma_device_type": 1 00:14:33.406 }, 00:14:33.406 { 00:14:33.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.406 "dma_device_type": 2 00:14:33.406 } 00:14:33.406 ], 00:14:33.406 "driver_specific": {} 00:14:33.406 } 00:14:33.406 ] 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.406 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.406 "name": "Existed_Raid", 00:14:33.406 "uuid": "5788318a-fa29-42d4-a06b-f43fd4380e9f", 00:14:33.406 "strip_size_kb": 64, 00:14:33.406 "state": "online", 00:14:33.406 "raid_level": "raid5f", 00:14:33.406 "superblock": false, 00:14:33.406 "num_base_bdevs": 3, 00:14:33.406 "num_base_bdevs_discovered": 3, 00:14:33.406 "num_base_bdevs_operational": 3, 00:14:33.406 "base_bdevs_list": [ 00:14:33.406 { 00:14:33.406 "name": "BaseBdev1", 00:14:33.406 "uuid": "9d9162e8-f1eb-43cc-918d-3986ddb95827", 00:14:33.406 "is_configured": true, 00:14:33.406 "data_offset": 0, 00:14:33.406 "data_size": 65536 00:14:33.406 }, 00:14:33.406 { 00:14:33.406 "name": "BaseBdev2", 00:14:33.406 "uuid": "5088b7eb-b41e-41b9-a0ee-4fa2c6d7d414", 00:14:33.406 "is_configured": true, 00:14:33.406 "data_offset": 0, 00:14:33.406 "data_size": 65536 00:14:33.406 }, 00:14:33.406 { 00:14:33.406 "name": "BaseBdev3", 00:14:33.406 "uuid": "0eb1b4ab-82d5-43ca-9441-bd44f48eecdd", 00:14:33.406 "is_configured": true, 00:14:33.406 "data_offset": 0, 00:14:33.406 "data_size": 65536 00:14:33.406 } 00:14:33.406 ] 00:14:33.407 }' 00:14:33.407 23:48:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.407 23:48:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.667 [2024-12-06 23:48:45.197491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.667 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:33.926 "name": "Existed_Raid", 00:14:33.926 "aliases": [ 00:14:33.926 "5788318a-fa29-42d4-a06b-f43fd4380e9f" 00:14:33.926 ], 00:14:33.926 "product_name": "Raid Volume", 00:14:33.926 "block_size": 512, 00:14:33.926 "num_blocks": 131072, 00:14:33.926 "uuid": "5788318a-fa29-42d4-a06b-f43fd4380e9f", 00:14:33.926 "assigned_rate_limits": { 00:14:33.926 "rw_ios_per_sec": 0, 00:14:33.926 "rw_mbytes_per_sec": 0, 00:14:33.926 "r_mbytes_per_sec": 0, 00:14:33.926 "w_mbytes_per_sec": 0 00:14:33.926 }, 00:14:33.926 "claimed": false, 00:14:33.926 "zoned": false, 00:14:33.926 "supported_io_types": { 00:14:33.926 
"read": true, 00:14:33.926 "write": true, 00:14:33.926 "unmap": false, 00:14:33.926 "flush": false, 00:14:33.926 "reset": true, 00:14:33.926 "nvme_admin": false, 00:14:33.926 "nvme_io": false, 00:14:33.926 "nvme_io_md": false, 00:14:33.926 "write_zeroes": true, 00:14:33.926 "zcopy": false, 00:14:33.926 "get_zone_info": false, 00:14:33.926 "zone_management": false, 00:14:33.926 "zone_append": false, 00:14:33.926 "compare": false, 00:14:33.926 "compare_and_write": false, 00:14:33.926 "abort": false, 00:14:33.926 "seek_hole": false, 00:14:33.926 "seek_data": false, 00:14:33.926 "copy": false, 00:14:33.926 "nvme_iov_md": false 00:14:33.926 }, 00:14:33.926 "driver_specific": { 00:14:33.926 "raid": { 00:14:33.926 "uuid": "5788318a-fa29-42d4-a06b-f43fd4380e9f", 00:14:33.926 "strip_size_kb": 64, 00:14:33.926 "state": "online", 00:14:33.926 "raid_level": "raid5f", 00:14:33.926 "superblock": false, 00:14:33.926 "num_base_bdevs": 3, 00:14:33.926 "num_base_bdevs_discovered": 3, 00:14:33.926 "num_base_bdevs_operational": 3, 00:14:33.926 "base_bdevs_list": [ 00:14:33.926 { 00:14:33.926 "name": "BaseBdev1", 00:14:33.926 "uuid": "9d9162e8-f1eb-43cc-918d-3986ddb95827", 00:14:33.926 "is_configured": true, 00:14:33.926 "data_offset": 0, 00:14:33.926 "data_size": 65536 00:14:33.926 }, 00:14:33.926 { 00:14:33.926 "name": "BaseBdev2", 00:14:33.926 "uuid": "5088b7eb-b41e-41b9-a0ee-4fa2c6d7d414", 00:14:33.926 "is_configured": true, 00:14:33.926 "data_offset": 0, 00:14:33.926 "data_size": 65536 00:14:33.926 }, 00:14:33.926 { 00:14:33.926 "name": "BaseBdev3", 00:14:33.926 "uuid": "0eb1b4ab-82d5-43ca-9441-bd44f48eecdd", 00:14:33.926 "is_configured": true, 00:14:33.926 "data_offset": 0, 00:14:33.926 "data_size": 65536 00:14:33.926 } 00:14:33.926 ] 00:14:33.926 } 00:14:33.926 } 00:14:33.926 }' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.926 23:48:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:33.926 BaseBdev2 00:14:33.926 BaseBdev3' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.926 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.185 [2024-12-06 23:48:45.492807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:34.185 23:48:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.185 23:48:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.186 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.186 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.186 "name": "Existed_Raid", 00:14:34.186 "uuid": "5788318a-fa29-42d4-a06b-f43fd4380e9f", 00:14:34.186 "strip_size_kb": 64, 00:14:34.186 "state": "online", 00:14:34.186 "raid_level": "raid5f", 00:14:34.186 "superblock": false, 00:14:34.186 "num_base_bdevs": 3, 00:14:34.186 "num_base_bdevs_discovered": 2, 00:14:34.186 "num_base_bdevs_operational": 2, 00:14:34.186 "base_bdevs_list": [ 00:14:34.186 { 00:14:34.186 "name": null, 00:14:34.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.186 "is_configured": false, 00:14:34.186 "data_offset": 0, 00:14:34.186 "data_size": 65536 00:14:34.186 }, 00:14:34.186 { 00:14:34.186 "name": "BaseBdev2", 00:14:34.186 "uuid": "5088b7eb-b41e-41b9-a0ee-4fa2c6d7d414", 00:14:34.186 "is_configured": true, 00:14:34.186 "data_offset": 0, 00:14:34.186 "data_size": 65536 00:14:34.186 }, 00:14:34.186 { 00:14:34.186 "name": "BaseBdev3", 00:14:34.186 "uuid": "0eb1b4ab-82d5-43ca-9441-bd44f48eecdd", 00:14:34.186 "is_configured": true, 00:14:34.186 "data_offset": 0, 00:14:34.186 "data_size": 65536 00:14:34.186 } 00:14:34.186 ] 00:14:34.186 }' 00:14:34.186 23:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.186 23:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.782 [2024-12-06 23:48:46.092397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.782 [2024-12-06 23:48:46.092490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.782 [2024-12-06 23:48:46.183284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.782 [2024-12-06 23:48:46.243228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:34.782 [2024-12-06 23:48:46.243324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.782 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:35.120 
23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 BaseBdev2 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 [ 00:14:35.120 { 00:14:35.120 "name": "BaseBdev2", 00:14:35.120 "aliases": [ 00:14:35.120 "c88c4746-e093-45fb-8ed7-125fe4362b56" 00:14:35.120 ], 00:14:35.120 "product_name": "Malloc disk", 00:14:35.120 "block_size": 512, 00:14:35.120 "num_blocks": 65536, 00:14:35.120 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:35.120 "assigned_rate_limits": { 00:14:35.120 "rw_ios_per_sec": 0, 00:14:35.120 "rw_mbytes_per_sec": 0, 00:14:35.120 "r_mbytes_per_sec": 0, 00:14:35.120 "w_mbytes_per_sec": 0 00:14:35.120 }, 00:14:35.120 "claimed": false, 00:14:35.120 "zoned": false, 00:14:35.120 "supported_io_types": { 00:14:35.120 "read": true, 00:14:35.120 "write": true, 00:14:35.120 "unmap": true, 00:14:35.120 "flush": true, 00:14:35.120 "reset": true, 00:14:35.120 "nvme_admin": false, 00:14:35.120 "nvme_io": false, 00:14:35.120 "nvme_io_md": false, 00:14:35.120 "write_zeroes": true, 00:14:35.120 "zcopy": true, 00:14:35.120 "get_zone_info": false, 00:14:35.120 "zone_management": false, 00:14:35.120 "zone_append": false, 00:14:35.120 "compare": false, 00:14:35.120 "compare_and_write": false, 00:14:35.120 "abort": true, 00:14:35.120 "seek_hole": false, 00:14:35.120 "seek_data": false, 00:14:35.120 "copy": true, 00:14:35.120 "nvme_iov_md": false 00:14:35.120 }, 00:14:35.120 "memory_domains": [ 00:14:35.120 { 00:14:35.120 "dma_device_id": "system", 00:14:35.120 "dma_device_type": 1 00:14:35.120 }, 00:14:35.120 { 00:14:35.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.120 "dma_device_type": 2 00:14:35.120 } 00:14:35.120 ], 00:14:35.120 "driver_specific": {} 
00:14:35.120 } 00:14:35.120 ] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 BaseBdev3 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 23:48:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 [ 00:14:35.120 { 00:14:35.120 "name": "BaseBdev3", 00:14:35.120 "aliases": [ 00:14:35.120 "01a8972f-4fdd-40f6-ba40-4b7495e1377a" 00:14:35.120 ], 00:14:35.120 "product_name": "Malloc disk", 00:14:35.120 "block_size": 512, 00:14:35.120 "num_blocks": 65536, 00:14:35.120 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:35.120 "assigned_rate_limits": { 00:14:35.120 "rw_ios_per_sec": 0, 00:14:35.120 "rw_mbytes_per_sec": 0, 00:14:35.120 "r_mbytes_per_sec": 0, 00:14:35.120 "w_mbytes_per_sec": 0 00:14:35.120 }, 00:14:35.120 "claimed": false, 00:14:35.120 "zoned": false, 00:14:35.120 "supported_io_types": { 00:14:35.120 "read": true, 00:14:35.120 "write": true, 00:14:35.120 "unmap": true, 00:14:35.120 "flush": true, 00:14:35.120 "reset": true, 00:14:35.120 "nvme_admin": false, 00:14:35.120 "nvme_io": false, 00:14:35.120 "nvme_io_md": false, 00:14:35.120 "write_zeroes": true, 00:14:35.120 "zcopy": true, 00:14:35.120 "get_zone_info": false, 00:14:35.120 "zone_management": false, 00:14:35.120 "zone_append": false, 00:14:35.120 "compare": false, 00:14:35.120 "compare_and_write": false, 00:14:35.120 "abort": true, 00:14:35.120 "seek_hole": false, 00:14:35.120 "seek_data": false, 00:14:35.120 "copy": true, 00:14:35.120 "nvme_iov_md": false 00:14:35.120 }, 00:14:35.120 "memory_domains": [ 00:14:35.120 { 00:14:35.120 "dma_device_id": "system", 00:14:35.120 "dma_device_type": 1 00:14:35.120 }, 00:14:35.120 { 00:14:35.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.120 "dma_device_type": 2 00:14:35.120 } 00:14:35.120 ], 
00:14:35.120 "driver_specific": {} 00:14:35.120 } 00:14:35.120 ] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 [2024-12-06 23:48:46.540384] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.120 [2024-12-06 23:48:46.540485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.120 [2024-12-06 23:48:46.540533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.120 [2024-12-06 23:48:46.542298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid5f 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.120 "name": "Existed_Raid", 00:14:35.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.120 "strip_size_kb": 64, 00:14:35.120 "state": "configuring", 00:14:35.120 "raid_level": "raid5f", 00:14:35.120 "superblock": false, 00:14:35.120 "num_base_bdevs": 3, 00:14:35.120 "num_base_bdevs_discovered": 2, 00:14:35.120 "num_base_bdevs_operational": 3, 00:14:35.120 "base_bdevs_list": [ 00:14:35.120 { 00:14:35.120 "name": "BaseBdev1", 00:14:35.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.120 "is_configured": false, 00:14:35.120 "data_offset": 0, 00:14:35.120 "data_size": 0 
00:14:35.120 }, 00:14:35.120 { 00:14:35.120 "name": "BaseBdev2", 00:14:35.120 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:35.120 "is_configured": true, 00:14:35.120 "data_offset": 0, 00:14:35.120 "data_size": 65536 00:14:35.120 }, 00:14:35.120 { 00:14:35.120 "name": "BaseBdev3", 00:14:35.120 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:35.120 "is_configured": true, 00:14:35.120 "data_offset": 0, 00:14:35.120 "data_size": 65536 00:14:35.120 } 00:14:35.120 ] 00:14:35.120 }' 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.120 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.688 [2024-12-06 23:48:46.983658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.688 23:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.688 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.688 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.688 "name": "Existed_Raid", 00:14:35.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.688 "strip_size_kb": 64, 00:14:35.688 "state": "configuring", 00:14:35.688 "raid_level": "raid5f", 00:14:35.688 "superblock": false, 00:14:35.688 "num_base_bdevs": 3, 00:14:35.688 "num_base_bdevs_discovered": 1, 00:14:35.688 "num_base_bdevs_operational": 3, 00:14:35.688 "base_bdevs_list": [ 00:14:35.688 { 00:14:35.688 "name": "BaseBdev1", 00:14:35.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.688 "is_configured": false, 00:14:35.688 "data_offset": 0, 00:14:35.688 "data_size": 0 00:14:35.688 }, 00:14:35.688 { 00:14:35.688 "name": null, 00:14:35.688 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:35.688 "is_configured": false, 00:14:35.688 "data_offset": 0, 00:14:35.688 "data_size": 65536 00:14:35.688 }, 
00:14:35.688 { 00:14:35.688 "name": "BaseBdev3", 00:14:35.688 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:35.688 "is_configured": true, 00:14:35.688 "data_offset": 0, 00:14:35.688 "data_size": 65536 00:14:35.688 } 00:14:35.688 ] 00:14:35.688 }' 00:14:35.688 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.688 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.948 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.948 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.948 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.948 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:35.948 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.207 [2024-12-06 23:48:47.545880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.207 BaseBdev1 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 
00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.207 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.207 [ 00:14:36.207 { 00:14:36.207 "name": "BaseBdev1", 00:14:36.207 "aliases": [ 00:14:36.207 "575144dc-ff9f-402e-a3f9-742bf1675ea7" 00:14:36.207 ], 00:14:36.207 "product_name": "Malloc disk", 00:14:36.207 "block_size": 512, 00:14:36.207 "num_blocks": 65536, 00:14:36.207 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:36.207 "assigned_rate_limits": { 00:14:36.207 "rw_ios_per_sec": 0, 00:14:36.207 "rw_mbytes_per_sec": 0, 00:14:36.207 "r_mbytes_per_sec": 0, 00:14:36.207 "w_mbytes_per_sec": 0 00:14:36.207 }, 00:14:36.207 "claimed": true, 00:14:36.207 "claim_type": "exclusive_write", 00:14:36.207 "zoned": false, 00:14:36.207 "supported_io_types": { 00:14:36.207 "read": true, 00:14:36.207 "write": true, 00:14:36.207 "unmap": 
true, 00:14:36.207 "flush": true, 00:14:36.207 "reset": true, 00:14:36.207 "nvme_admin": false, 00:14:36.207 "nvme_io": false, 00:14:36.207 "nvme_io_md": false, 00:14:36.207 "write_zeroes": true, 00:14:36.207 "zcopy": true, 00:14:36.207 "get_zone_info": false, 00:14:36.207 "zone_management": false, 00:14:36.207 "zone_append": false, 00:14:36.207 "compare": false, 00:14:36.207 "compare_and_write": false, 00:14:36.207 "abort": true, 00:14:36.207 "seek_hole": false, 00:14:36.207 "seek_data": false, 00:14:36.208 "copy": true, 00:14:36.208 "nvme_iov_md": false 00:14:36.208 }, 00:14:36.208 "memory_domains": [ 00:14:36.208 { 00:14:36.208 "dma_device_id": "system", 00:14:36.208 "dma_device_type": 1 00:14:36.208 }, 00:14:36.208 { 00:14:36.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.208 "dma_device_type": 2 00:14:36.208 } 00:14:36.208 ], 00:14:36.208 "driver_specific": {} 00:14:36.208 } 00:14:36.208 ] 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.208 
23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.208 "name": "Existed_Raid", 00:14:36.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.208 "strip_size_kb": 64, 00:14:36.208 "state": "configuring", 00:14:36.208 "raid_level": "raid5f", 00:14:36.208 "superblock": false, 00:14:36.208 "num_base_bdevs": 3, 00:14:36.208 "num_base_bdevs_discovered": 2, 00:14:36.208 "num_base_bdevs_operational": 3, 00:14:36.208 "base_bdevs_list": [ 00:14:36.208 { 00:14:36.208 "name": "BaseBdev1", 00:14:36.208 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:36.208 "is_configured": true, 00:14:36.208 "data_offset": 0, 00:14:36.208 "data_size": 65536 00:14:36.208 }, 00:14:36.208 { 00:14:36.208 "name": null, 00:14:36.208 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:36.208 "is_configured": false, 00:14:36.208 "data_offset": 0, 00:14:36.208 "data_size": 65536 00:14:36.208 }, 00:14:36.208 { 00:14:36.208 "name": "BaseBdev3", 00:14:36.208 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:36.208 "is_configured": true, 
00:14:36.208 "data_offset": 0, 00:14:36.208 "data_size": 65536 00:14:36.208 } 00:14:36.208 ] 00:14:36.208 }' 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.208 23:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 [2024-12-06 23:48:48.093084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.777 23:48:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.777 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.777 "name": "Existed_Raid", 00:14:36.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.777 "strip_size_kb": 64, 00:14:36.777 "state": "configuring", 00:14:36.777 "raid_level": "raid5f", 00:14:36.777 "superblock": false, 00:14:36.777 "num_base_bdevs": 3, 00:14:36.777 "num_base_bdevs_discovered": 1, 00:14:36.777 "num_base_bdevs_operational": 3, 00:14:36.777 "base_bdevs_list": [ 00:14:36.777 { 00:14:36.777 "name": "BaseBdev1", 00:14:36.777 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:36.777 "is_configured": true, 
00:14:36.777 "data_offset": 0, 00:14:36.777 "data_size": 65536 00:14:36.777 }, 00:14:36.777 { 00:14:36.777 "name": null, 00:14:36.777 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:36.777 "is_configured": false, 00:14:36.777 "data_offset": 0, 00:14:36.777 "data_size": 65536 00:14:36.777 }, 00:14:36.777 { 00:14:36.777 "name": null, 00:14:36.778 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:36.778 "is_configured": false, 00:14:36.778 "data_offset": 0, 00:14:36.778 "data_size": 65536 00:14:36.778 } 00:14:36.778 ] 00:14:36.778 }' 00:14:36.778 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.778 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.037 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.297 [2024-12-06 23:48:48.600230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.297 "name": "Existed_Raid", 00:14:37.297 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:37.297 "strip_size_kb": 64, 00:14:37.297 "state": "configuring", 00:14:37.297 "raid_level": "raid5f", 00:14:37.297 "superblock": false, 00:14:37.297 "num_base_bdevs": 3, 00:14:37.297 "num_base_bdevs_discovered": 2, 00:14:37.297 "num_base_bdevs_operational": 3, 00:14:37.297 "base_bdevs_list": [ 00:14:37.297 { 00:14:37.297 "name": "BaseBdev1", 00:14:37.297 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:37.297 "is_configured": true, 00:14:37.297 "data_offset": 0, 00:14:37.297 "data_size": 65536 00:14:37.297 }, 00:14:37.297 { 00:14:37.297 "name": null, 00:14:37.297 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:37.297 "is_configured": false, 00:14:37.297 "data_offset": 0, 00:14:37.297 "data_size": 65536 00:14:37.297 }, 00:14:37.297 { 00:14:37.297 "name": "BaseBdev3", 00:14:37.297 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:37.297 "is_configured": true, 00:14:37.297 "data_offset": 0, 00:14:37.297 "data_size": 65536 00:14:37.297 } 00:14:37.297 ] 00:14:37.297 }' 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.297 23:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.557 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.557 [2024-12-06 23:48:49.095501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.818 "name": "Existed_Raid", 00:14:37.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.818 "strip_size_kb": 64, 00:14:37.818 "state": "configuring", 00:14:37.818 "raid_level": "raid5f", 00:14:37.818 "superblock": false, 00:14:37.818 "num_base_bdevs": 3, 00:14:37.818 "num_base_bdevs_discovered": 1, 00:14:37.818 "num_base_bdevs_operational": 3, 00:14:37.818 "base_bdevs_list": [ 00:14:37.818 { 00:14:37.818 "name": null, 00:14:37.818 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:37.818 "is_configured": false, 00:14:37.818 "data_offset": 0, 00:14:37.818 "data_size": 65536 00:14:37.818 }, 00:14:37.818 { 00:14:37.818 "name": null, 00:14:37.818 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:37.818 "is_configured": false, 00:14:37.818 "data_offset": 0, 00:14:37.818 "data_size": 65536 00:14:37.818 }, 00:14:37.818 { 00:14:37.818 "name": "BaseBdev3", 00:14:37.818 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:37.818 "is_configured": true, 00:14:37.818 "data_offset": 0, 00:14:37.818 "data_size": 65536 00:14:37.818 } 00:14:37.818 ] 00:14:37.818 }' 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.818 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.078 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.078 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:38.078 23:48:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.078 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.339 [2024-12-06 23:48:49.682563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.339 23:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.339 "name": "Existed_Raid", 00:14:38.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.339 "strip_size_kb": 64, 00:14:38.339 "state": "configuring", 00:14:38.340 "raid_level": "raid5f", 00:14:38.340 "superblock": false, 00:14:38.340 "num_base_bdevs": 3, 00:14:38.340 "num_base_bdevs_discovered": 2, 00:14:38.340 "num_base_bdevs_operational": 3, 00:14:38.340 "base_bdevs_list": [ 00:14:38.340 { 00:14:38.340 "name": null, 00:14:38.340 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:38.340 "is_configured": false, 00:14:38.340 "data_offset": 0, 00:14:38.340 "data_size": 65536 00:14:38.340 }, 00:14:38.340 { 00:14:38.340 "name": "BaseBdev2", 00:14:38.340 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:38.340 "is_configured": true, 00:14:38.340 "data_offset": 0, 00:14:38.340 "data_size": 65536 00:14:38.340 }, 00:14:38.340 { 00:14:38.340 "name": "BaseBdev3", 00:14:38.340 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:38.340 "is_configured": true, 00:14:38.340 "data_offset": 0, 00:14:38.340 "data_size": 65536 00:14:38.340 } 00:14:38.340 ] 00:14:38.340 }' 00:14:38.340 23:48:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.340 23:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:38.599 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.858 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.858 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 575144dc-ff9f-402e-a3f9-742bf1675ea7 00:14:38.858 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.859 [2024-12-06 23:48:50.232038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:38.859 [2024-12-06 
23:48:50.232143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:38.859 [2024-12-06 23:48:50.232169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:38.859 [2024-12-06 23:48:50.232452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:38.859 [2024-12-06 23:48:50.237537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:38.859 [2024-12-06 23:48:50.237593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:38.859 [2024-12-06 23:48:50.237925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.859 NewBaseBdev 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.859 [ 00:14:38.859 { 00:14:38.859 "name": "NewBaseBdev", 00:14:38.859 "aliases": [ 00:14:38.859 "575144dc-ff9f-402e-a3f9-742bf1675ea7" 00:14:38.859 ], 00:14:38.859 "product_name": "Malloc disk", 00:14:38.859 "block_size": 512, 00:14:38.859 "num_blocks": 65536, 00:14:38.859 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:38.859 "assigned_rate_limits": { 00:14:38.859 "rw_ios_per_sec": 0, 00:14:38.859 "rw_mbytes_per_sec": 0, 00:14:38.859 "r_mbytes_per_sec": 0, 00:14:38.859 "w_mbytes_per_sec": 0 00:14:38.859 }, 00:14:38.859 "claimed": true, 00:14:38.859 "claim_type": "exclusive_write", 00:14:38.859 "zoned": false, 00:14:38.859 "supported_io_types": { 00:14:38.859 "read": true, 00:14:38.859 "write": true, 00:14:38.859 "unmap": true, 00:14:38.859 "flush": true, 00:14:38.859 "reset": true, 00:14:38.859 "nvme_admin": false, 00:14:38.859 "nvme_io": false, 00:14:38.859 "nvme_io_md": false, 00:14:38.859 "write_zeroes": true, 00:14:38.859 "zcopy": true, 00:14:38.859 "get_zone_info": false, 00:14:38.859 "zone_management": false, 00:14:38.859 "zone_append": false, 00:14:38.859 "compare": false, 00:14:38.859 "compare_and_write": false, 00:14:38.859 "abort": true, 00:14:38.859 "seek_hole": false, 00:14:38.859 "seek_data": false, 00:14:38.859 "copy": true, 00:14:38.859 "nvme_iov_md": false 00:14:38.859 }, 00:14:38.859 "memory_domains": [ 00:14:38.859 { 00:14:38.859 "dma_device_id": "system", 00:14:38.859 "dma_device_type": 1 00:14:38.859 }, 00:14:38.859 { 00:14:38.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.859 "dma_device_type": 2 00:14:38.859 } 
00:14:38.859 ], 00:14:38.859 "driver_specific": {} 00:14:38.859 } 00:14:38.859 ] 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.859 "name": "Existed_Raid", 00:14:38.859 "uuid": "a7aee241-61c1-40dd-a799-13980055d87f", 00:14:38.859 "strip_size_kb": 64, 00:14:38.859 "state": "online", 00:14:38.859 "raid_level": "raid5f", 00:14:38.859 "superblock": false, 00:14:38.859 "num_base_bdevs": 3, 00:14:38.859 "num_base_bdevs_discovered": 3, 00:14:38.859 "num_base_bdevs_operational": 3, 00:14:38.859 "base_bdevs_list": [ 00:14:38.859 { 00:14:38.859 "name": "NewBaseBdev", 00:14:38.859 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:38.859 "is_configured": true, 00:14:38.859 "data_offset": 0, 00:14:38.859 "data_size": 65536 00:14:38.859 }, 00:14:38.859 { 00:14:38.859 "name": "BaseBdev2", 00:14:38.859 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:38.859 "is_configured": true, 00:14:38.859 "data_offset": 0, 00:14:38.859 "data_size": 65536 00:14:38.859 }, 00:14:38.859 { 00:14:38.859 "name": "BaseBdev3", 00:14:38.859 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:38.859 "is_configured": true, 00:14:38.859 "data_offset": 0, 00:14:38.859 "data_size": 65536 00:14:38.859 } 00:14:38.859 ] 00:14:38.859 }' 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.859 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.429 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.430 [2024-12-06 23:48:50.759598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.430 "name": "Existed_Raid", 00:14:39.430 "aliases": [ 00:14:39.430 "a7aee241-61c1-40dd-a799-13980055d87f" 00:14:39.430 ], 00:14:39.430 "product_name": "Raid Volume", 00:14:39.430 "block_size": 512, 00:14:39.430 "num_blocks": 131072, 00:14:39.430 "uuid": "a7aee241-61c1-40dd-a799-13980055d87f", 00:14:39.430 "assigned_rate_limits": { 00:14:39.430 "rw_ios_per_sec": 0, 00:14:39.430 "rw_mbytes_per_sec": 0, 00:14:39.430 "r_mbytes_per_sec": 0, 00:14:39.430 "w_mbytes_per_sec": 0 00:14:39.430 }, 00:14:39.430 "claimed": false, 00:14:39.430 "zoned": false, 00:14:39.430 "supported_io_types": { 00:14:39.430 "read": true, 00:14:39.430 "write": true, 00:14:39.430 "unmap": false, 00:14:39.430 "flush": false, 00:14:39.430 "reset": true, 00:14:39.430 "nvme_admin": false, 00:14:39.430 "nvme_io": false, 00:14:39.430 "nvme_io_md": false, 00:14:39.430 "write_zeroes": true, 00:14:39.430 "zcopy": false, 00:14:39.430 "get_zone_info": false, 00:14:39.430 "zone_management": false, 00:14:39.430 "zone_append": false, 00:14:39.430 "compare": false, 00:14:39.430 
"compare_and_write": false, 00:14:39.430 "abort": false, 00:14:39.430 "seek_hole": false, 00:14:39.430 "seek_data": false, 00:14:39.430 "copy": false, 00:14:39.430 "nvme_iov_md": false 00:14:39.430 }, 00:14:39.430 "driver_specific": { 00:14:39.430 "raid": { 00:14:39.430 "uuid": "a7aee241-61c1-40dd-a799-13980055d87f", 00:14:39.430 "strip_size_kb": 64, 00:14:39.430 "state": "online", 00:14:39.430 "raid_level": "raid5f", 00:14:39.430 "superblock": false, 00:14:39.430 "num_base_bdevs": 3, 00:14:39.430 "num_base_bdevs_discovered": 3, 00:14:39.430 "num_base_bdevs_operational": 3, 00:14:39.430 "base_bdevs_list": [ 00:14:39.430 { 00:14:39.430 "name": "NewBaseBdev", 00:14:39.430 "uuid": "575144dc-ff9f-402e-a3f9-742bf1675ea7", 00:14:39.430 "is_configured": true, 00:14:39.430 "data_offset": 0, 00:14:39.430 "data_size": 65536 00:14:39.430 }, 00:14:39.430 { 00:14:39.430 "name": "BaseBdev2", 00:14:39.430 "uuid": "c88c4746-e093-45fb-8ed7-125fe4362b56", 00:14:39.430 "is_configured": true, 00:14:39.430 "data_offset": 0, 00:14:39.430 "data_size": 65536 00:14:39.430 }, 00:14:39.430 { 00:14:39.430 "name": "BaseBdev3", 00:14:39.430 "uuid": "01a8972f-4fdd-40f6-ba40-4b7495e1377a", 00:14:39.430 "is_configured": true, 00:14:39.430 "data_offset": 0, 00:14:39.430 "data_size": 65536 00:14:39.430 } 00:14:39.430 ] 00:14:39.430 } 00:14:39.430 } 00:14:39.430 }' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:39.430 BaseBdev2 00:14:39.430 BaseBdev3' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.430 23:48:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.430 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.690 23:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.690 [2024-12-06 23:48:51.030925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.690 [2024-12-06 23:48:51.030989] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.690 [2024-12-06 23:48:51.031093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.690 [2024-12-06 23:48:51.031379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.690 [2024-12-06 23:48:51.031434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.690 
23:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79804 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79804 ']' 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 79804 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79804 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79804' 00:14:39.690 killing process with pid 79804 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79804 00:14:39.690 [2024-12-06 23:48:51.080304] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.690 23:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79804 00:14:39.950 [2024-12-06 23:48:51.359218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.889 23:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:40.889 ************************************ 00:14:40.889 END TEST raid5f_state_function_test 00:14:40.889 ************************************ 00:14:40.889 00:14:40.889 real 0m10.684s 00:14:40.889 user 0m17.036s 00:14:40.889 sys 0m1.994s 00:14:40.889 23:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.889 23:48:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.149 23:48:52 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:41.149 23:48:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:41.149 23:48:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.149 23:48:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.149 ************************************ 00:14:41.149 START TEST raid5f_state_function_test_sb 00:14:41.149 ************************************ 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.149 23:48:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80422 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80422' 00:14:41.149 Process raid pid: 80422 
00:14:41.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80422 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80422 ']' 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.149 23:48:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.149 [2024-12-06 23:48:52.607973] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:14:41.149 [2024-12-06 23:48:52.608159] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.410 [2024-12-06 23:48:52.786948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.410 [2024-12-06 23:48:52.896414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.670 [2024-12-06 23:48:53.080910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.670 [2024-12-06 23:48:53.081022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.930 [2024-12-06 23:48:53.407098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.930 [2024-12-06 23:48:53.407217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.930 [2024-12-06 23:48:53.407254] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.930 [2024-12-06 23:48:53.407277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.930 [2024-12-06 23:48:53.407294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:41.930 [2024-12-06 23:48:53.407314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.930 23:48:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.930 "name": "Existed_Raid", 00:14:41.930 "uuid": "52d44951-545c-4af8-b891-aa0e367417c3", 00:14:41.930 "strip_size_kb": 64, 00:14:41.930 "state": "configuring", 00:14:41.930 "raid_level": "raid5f", 00:14:41.930 "superblock": true, 00:14:41.930 "num_base_bdevs": 3, 00:14:41.930 "num_base_bdevs_discovered": 0, 00:14:41.930 "num_base_bdevs_operational": 3, 00:14:41.930 "base_bdevs_list": [ 00:14:41.930 { 00:14:41.930 "name": "BaseBdev1", 00:14:41.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.930 "is_configured": false, 00:14:41.930 "data_offset": 0, 00:14:41.930 "data_size": 0 00:14:41.930 }, 00:14:41.930 { 00:14:41.930 "name": "BaseBdev2", 00:14:41.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.930 "is_configured": false, 00:14:41.930 "data_offset": 0, 00:14:41.930 "data_size": 0 00:14:41.930 }, 00:14:41.930 { 00:14:41.930 "name": "BaseBdev3", 00:14:41.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.930 "is_configured": false, 00:14:41.930 "data_offset": 0, 00:14:41.930 "data_size": 0 00:14:41.930 } 00:14:41.930 ] 00:14:41.930 }' 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.930 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.500 [2024-12-06 23:48:53.826296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.500 
[2024-12-06 23:48:53.826369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.500 [2024-12-06 23:48:53.838289] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.500 [2024-12-06 23:48:53.838362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.500 [2024-12-06 23:48:53.838403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.500 [2024-12-06 23:48:53.838425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.500 [2024-12-06 23:48:53.838442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.500 [2024-12-06 23:48:53.838462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.500 [2024-12-06 23:48:53.885208] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.500 BaseBdev1 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.500 [ 00:14:42.500 { 00:14:42.500 "name": "BaseBdev1", 00:14:42.500 "aliases": [ 00:14:42.500 "9830b430-e009-4500-aae9-589b962e6cf2" 00:14:42.500 ], 00:14:42.500 "product_name": "Malloc disk", 00:14:42.500 "block_size": 512, 00:14:42.500 
"num_blocks": 65536, 00:14:42.500 "uuid": "9830b430-e009-4500-aae9-589b962e6cf2", 00:14:42.500 "assigned_rate_limits": { 00:14:42.500 "rw_ios_per_sec": 0, 00:14:42.500 "rw_mbytes_per_sec": 0, 00:14:42.500 "r_mbytes_per_sec": 0, 00:14:42.500 "w_mbytes_per_sec": 0 00:14:42.500 }, 00:14:42.500 "claimed": true, 00:14:42.500 "claim_type": "exclusive_write", 00:14:42.500 "zoned": false, 00:14:42.500 "supported_io_types": { 00:14:42.500 "read": true, 00:14:42.500 "write": true, 00:14:42.500 "unmap": true, 00:14:42.500 "flush": true, 00:14:42.500 "reset": true, 00:14:42.500 "nvme_admin": false, 00:14:42.500 "nvme_io": false, 00:14:42.500 "nvme_io_md": false, 00:14:42.500 "write_zeroes": true, 00:14:42.500 "zcopy": true, 00:14:42.500 "get_zone_info": false, 00:14:42.500 "zone_management": false, 00:14:42.500 "zone_append": false, 00:14:42.500 "compare": false, 00:14:42.500 "compare_and_write": false, 00:14:42.500 "abort": true, 00:14:42.500 "seek_hole": false, 00:14:42.500 "seek_data": false, 00:14:42.500 "copy": true, 00:14:42.500 "nvme_iov_md": false 00:14:42.500 }, 00:14:42.500 "memory_domains": [ 00:14:42.500 { 00:14:42.500 "dma_device_id": "system", 00:14:42.500 "dma_device_type": 1 00:14:42.500 }, 00:14:42.500 { 00:14:42.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.500 "dma_device_type": 2 00:14:42.500 } 00:14:42.500 ], 00:14:42.500 "driver_specific": {} 00:14:42.500 } 00:14:42.500 ] 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.500 "name": "Existed_Raid", 00:14:42.500 "uuid": "84bebedb-b3ef-4bab-a74b-e60b7a382c7b", 00:14:42.500 "strip_size_kb": 64, 00:14:42.500 "state": "configuring", 00:14:42.500 "raid_level": "raid5f", 00:14:42.500 "superblock": true, 00:14:42.500 "num_base_bdevs": 3, 00:14:42.500 "num_base_bdevs_discovered": 1, 00:14:42.500 "num_base_bdevs_operational": 3, 00:14:42.500 "base_bdevs_list": [ 00:14:42.500 { 00:14:42.500 
"name": "BaseBdev1", 00:14:42.500 "uuid": "9830b430-e009-4500-aae9-589b962e6cf2", 00:14:42.500 "is_configured": true, 00:14:42.500 "data_offset": 2048, 00:14:42.500 "data_size": 63488 00:14:42.500 }, 00:14:42.500 { 00:14:42.500 "name": "BaseBdev2", 00:14:42.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.500 "is_configured": false, 00:14:42.500 "data_offset": 0, 00:14:42.500 "data_size": 0 00:14:42.500 }, 00:14:42.500 { 00:14:42.500 "name": "BaseBdev3", 00:14:42.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.500 "is_configured": false, 00:14:42.500 "data_offset": 0, 00:14:42.500 "data_size": 0 00:14:42.500 } 00:14:42.500 ] 00:14:42.500 }' 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.500 23:48:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.072 [2024-12-06 23:48:54.340646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.072 [2024-12-06 23:48:54.340738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:43.072 [2024-12-06 23:48:54.352664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.072 [2024-12-06 23:48:54.354381] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.072 [2024-12-06 23:48:54.354426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.072 [2024-12-06 23:48:54.354436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.072 [2024-12-06 23:48:54.354445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.072 "name": "Existed_Raid", 00:14:43.072 "uuid": "4547e253-fd73-46f3-ab4f-d30d22640c88", 00:14:43.072 "strip_size_kb": 64, 00:14:43.072 "state": "configuring", 00:14:43.072 "raid_level": "raid5f", 00:14:43.072 "superblock": true, 00:14:43.072 "num_base_bdevs": 3, 00:14:43.072 "num_base_bdevs_discovered": 1, 00:14:43.072 "num_base_bdevs_operational": 3, 00:14:43.072 "base_bdevs_list": [ 00:14:43.072 { 00:14:43.072 "name": "BaseBdev1", 00:14:43.072 "uuid": "9830b430-e009-4500-aae9-589b962e6cf2", 00:14:43.072 "is_configured": true, 00:14:43.072 "data_offset": 2048, 00:14:43.072 "data_size": 63488 00:14:43.072 }, 00:14:43.072 { 00:14:43.072 "name": "BaseBdev2", 00:14:43.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.072 "is_configured": false, 00:14:43.072 "data_offset": 0, 00:14:43.072 "data_size": 0 00:14:43.072 }, 00:14:43.072 { 00:14:43.072 "name": "BaseBdev3", 00:14:43.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.072 "is_configured": false, 00:14:43.072 "data_offset": 0, 00:14:43.072 "data_size": 
0 00:14:43.072 } 00:14:43.072 ] 00:14:43.072 }' 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.072 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.332 [2024-12-06 23:48:54.858592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.332 BaseBdev2 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:43.332 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.333 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.333 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.333 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.333 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.333 23:48:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.333 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.333 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.333 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.333 [ 00:14:43.333 { 00:14:43.333 "name": "BaseBdev2", 00:14:43.333 "aliases": [ 00:14:43.333 "8ea79a65-d0b3-4bb8-bb34-cf63678abd45" 00:14:43.333 ], 00:14:43.333 "product_name": "Malloc disk", 00:14:43.333 "block_size": 512, 00:14:43.333 "num_blocks": 65536, 00:14:43.333 "uuid": "8ea79a65-d0b3-4bb8-bb34-cf63678abd45", 00:14:43.333 "assigned_rate_limits": { 00:14:43.333 "rw_ios_per_sec": 0, 00:14:43.333 "rw_mbytes_per_sec": 0, 00:14:43.333 "r_mbytes_per_sec": 0, 00:14:43.333 "w_mbytes_per_sec": 0 00:14:43.333 }, 00:14:43.333 "claimed": true, 00:14:43.333 "claim_type": "exclusive_write", 00:14:43.333 "zoned": false, 00:14:43.593 "supported_io_types": { 00:14:43.593 "read": true, 00:14:43.593 "write": true, 00:14:43.593 "unmap": true, 00:14:43.593 "flush": true, 00:14:43.593 "reset": true, 00:14:43.593 "nvme_admin": false, 00:14:43.593 "nvme_io": false, 00:14:43.593 "nvme_io_md": false, 00:14:43.593 "write_zeroes": true, 00:14:43.593 "zcopy": true, 00:14:43.593 "get_zone_info": false, 00:14:43.593 "zone_management": false, 00:14:43.593 "zone_append": false, 00:14:43.593 "compare": false, 00:14:43.593 "compare_and_write": false, 00:14:43.593 "abort": true, 00:14:43.593 "seek_hole": false, 00:14:43.593 "seek_data": false, 00:14:43.593 "copy": true, 00:14:43.593 "nvme_iov_md": false 00:14:43.593 }, 00:14:43.593 "memory_domains": [ 00:14:43.593 { 00:14:43.593 "dma_device_id": "system", 00:14:43.593 "dma_device_type": 1 00:14:43.593 }, 00:14:43.593 { 00:14:43.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.593 "dma_device_type": 2 00:14:43.593 } 
00:14:43.593 ], 00:14:43.593 "driver_specific": {} 00:14:43.593 } 00:14:43.593 ] 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.593 "name": "Existed_Raid", 00:14:43.593 "uuid": "4547e253-fd73-46f3-ab4f-d30d22640c88", 00:14:43.593 "strip_size_kb": 64, 00:14:43.593 "state": "configuring", 00:14:43.593 "raid_level": "raid5f", 00:14:43.593 "superblock": true, 00:14:43.593 "num_base_bdevs": 3, 00:14:43.593 "num_base_bdevs_discovered": 2, 00:14:43.593 "num_base_bdevs_operational": 3, 00:14:43.593 "base_bdevs_list": [ 00:14:43.593 { 00:14:43.593 "name": "BaseBdev1", 00:14:43.593 "uuid": "9830b430-e009-4500-aae9-589b962e6cf2", 00:14:43.593 "is_configured": true, 00:14:43.593 "data_offset": 2048, 00:14:43.593 "data_size": 63488 00:14:43.593 }, 00:14:43.593 { 00:14:43.593 "name": "BaseBdev2", 00:14:43.593 "uuid": "8ea79a65-d0b3-4bb8-bb34-cf63678abd45", 00:14:43.593 "is_configured": true, 00:14:43.593 "data_offset": 2048, 00:14:43.593 "data_size": 63488 00:14:43.593 }, 00:14:43.593 { 00:14:43.593 "name": "BaseBdev3", 00:14:43.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.593 "is_configured": false, 00:14:43.593 "data_offset": 0, 00:14:43.593 "data_size": 0 00:14:43.593 } 00:14:43.593 ] 00:14:43.593 }' 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.593 23:48:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.853 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:43.853 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:43.853 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.112 [2024-12-06 23:48:55.415142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.112 [2024-12-06 23:48:55.415416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:44.112 [2024-12-06 23:48:55.415435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.112 [2024-12-06 23:48:55.415716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:44.112 BaseBdev3 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.112 [2024-12-06 23:48:55.420990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:44.112 [2024-12-06 23:48:55.421071] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:44.112 [2024-12-06 23:48:55.421328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.112 [ 00:14:44.112 { 00:14:44.112 "name": "BaseBdev3", 00:14:44.112 "aliases": [ 00:14:44.112 "5d77d848-5cff-4488-a00d-4f5185480e4c" 00:14:44.112 ], 00:14:44.112 "product_name": "Malloc disk", 00:14:44.112 "block_size": 512, 00:14:44.112 "num_blocks": 65536, 00:14:44.112 "uuid": "5d77d848-5cff-4488-a00d-4f5185480e4c", 00:14:44.112 "assigned_rate_limits": { 00:14:44.112 "rw_ios_per_sec": 0, 00:14:44.112 "rw_mbytes_per_sec": 0, 00:14:44.112 "r_mbytes_per_sec": 0, 00:14:44.112 "w_mbytes_per_sec": 0 00:14:44.112 }, 00:14:44.112 "claimed": true, 00:14:44.112 "claim_type": "exclusive_write", 00:14:44.112 "zoned": false, 00:14:44.112 "supported_io_types": { 00:14:44.112 "read": true, 00:14:44.112 "write": true, 00:14:44.112 "unmap": true, 00:14:44.112 "flush": true, 00:14:44.112 "reset": true, 00:14:44.112 "nvme_admin": false, 00:14:44.112 "nvme_io": false, 00:14:44.112 "nvme_io_md": false, 00:14:44.112 "write_zeroes": true, 00:14:44.112 "zcopy": true, 00:14:44.112 "get_zone_info": false, 00:14:44.112 "zone_management": false, 00:14:44.112 "zone_append": false, 00:14:44.112 "compare": false, 00:14:44.112 "compare_and_write": false, 00:14:44.112 "abort": true, 00:14:44.112 "seek_hole": false, 00:14:44.112 "seek_data": false, 00:14:44.112 "copy": true, 00:14:44.112 
"nvme_iov_md": false 00:14:44.112 }, 00:14:44.112 "memory_domains": [ 00:14:44.112 { 00:14:44.112 "dma_device_id": "system", 00:14:44.112 "dma_device_type": 1 00:14:44.112 }, 00:14:44.112 { 00:14:44.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.112 "dma_device_type": 2 00:14:44.112 } 00:14:44.112 ], 00:14:44.112 "driver_specific": {} 00:14:44.112 } 00:14:44.112 ] 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.112 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.112 "name": "Existed_Raid", 00:14:44.112 "uuid": "4547e253-fd73-46f3-ab4f-d30d22640c88", 00:14:44.112 "strip_size_kb": 64, 00:14:44.112 "state": "online", 00:14:44.112 "raid_level": "raid5f", 00:14:44.112 "superblock": true, 00:14:44.113 "num_base_bdevs": 3, 00:14:44.113 "num_base_bdevs_discovered": 3, 00:14:44.113 "num_base_bdevs_operational": 3, 00:14:44.113 "base_bdevs_list": [ 00:14:44.113 { 00:14:44.113 "name": "BaseBdev1", 00:14:44.113 "uuid": "9830b430-e009-4500-aae9-589b962e6cf2", 00:14:44.113 "is_configured": true, 00:14:44.113 "data_offset": 2048, 00:14:44.113 "data_size": 63488 00:14:44.113 }, 00:14:44.113 { 00:14:44.113 "name": "BaseBdev2", 00:14:44.113 "uuid": "8ea79a65-d0b3-4bb8-bb34-cf63678abd45", 00:14:44.113 "is_configured": true, 00:14:44.113 "data_offset": 2048, 00:14:44.113 "data_size": 63488 00:14:44.113 }, 00:14:44.113 { 00:14:44.113 "name": "BaseBdev3", 00:14:44.113 "uuid": "5d77d848-5cff-4488-a00d-4f5185480e4c", 00:14:44.113 "is_configured": true, 00:14:44.113 "data_offset": 2048, 00:14:44.113 "data_size": 63488 00:14:44.113 } 00:14:44.113 ] 00:14:44.113 }' 00:14:44.113 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.113 23:48:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.372 [2024-12-06 23:48:55.886634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.372 "name": "Existed_Raid", 00:14:44.372 "aliases": [ 00:14:44.372 "4547e253-fd73-46f3-ab4f-d30d22640c88" 00:14:44.372 ], 00:14:44.372 "product_name": "Raid Volume", 00:14:44.372 "block_size": 512, 00:14:44.372 "num_blocks": 126976, 00:14:44.372 "uuid": "4547e253-fd73-46f3-ab4f-d30d22640c88", 00:14:44.372 "assigned_rate_limits": { 00:14:44.372 "rw_ios_per_sec": 0, 00:14:44.372 
"rw_mbytes_per_sec": 0, 00:14:44.372 "r_mbytes_per_sec": 0, 00:14:44.372 "w_mbytes_per_sec": 0 00:14:44.372 }, 00:14:44.372 "claimed": false, 00:14:44.372 "zoned": false, 00:14:44.372 "supported_io_types": { 00:14:44.372 "read": true, 00:14:44.372 "write": true, 00:14:44.372 "unmap": false, 00:14:44.372 "flush": false, 00:14:44.372 "reset": true, 00:14:44.372 "nvme_admin": false, 00:14:44.372 "nvme_io": false, 00:14:44.372 "nvme_io_md": false, 00:14:44.372 "write_zeroes": true, 00:14:44.372 "zcopy": false, 00:14:44.372 "get_zone_info": false, 00:14:44.372 "zone_management": false, 00:14:44.372 "zone_append": false, 00:14:44.372 "compare": false, 00:14:44.372 "compare_and_write": false, 00:14:44.372 "abort": false, 00:14:44.372 "seek_hole": false, 00:14:44.372 "seek_data": false, 00:14:44.372 "copy": false, 00:14:44.372 "nvme_iov_md": false 00:14:44.372 }, 00:14:44.372 "driver_specific": { 00:14:44.372 "raid": { 00:14:44.372 "uuid": "4547e253-fd73-46f3-ab4f-d30d22640c88", 00:14:44.372 "strip_size_kb": 64, 00:14:44.372 "state": "online", 00:14:44.372 "raid_level": "raid5f", 00:14:44.372 "superblock": true, 00:14:44.372 "num_base_bdevs": 3, 00:14:44.372 "num_base_bdevs_discovered": 3, 00:14:44.372 "num_base_bdevs_operational": 3, 00:14:44.372 "base_bdevs_list": [ 00:14:44.372 { 00:14:44.372 "name": "BaseBdev1", 00:14:44.372 "uuid": "9830b430-e009-4500-aae9-589b962e6cf2", 00:14:44.372 "is_configured": true, 00:14:44.372 "data_offset": 2048, 00:14:44.372 "data_size": 63488 00:14:44.372 }, 00:14:44.372 { 00:14:44.372 "name": "BaseBdev2", 00:14:44.372 "uuid": "8ea79a65-d0b3-4bb8-bb34-cf63678abd45", 00:14:44.372 "is_configured": true, 00:14:44.372 "data_offset": 2048, 00:14:44.372 "data_size": 63488 00:14:44.372 }, 00:14:44.372 { 00:14:44.372 "name": "BaseBdev3", 00:14:44.372 "uuid": "5d77d848-5cff-4488-a00d-4f5185480e4c", 00:14:44.372 "is_configured": true, 00:14:44.372 "data_offset": 2048, 00:14:44.372 "data_size": 63488 00:14:44.372 } 00:14:44.372 ] 00:14:44.372 } 
00:14:44.372 } 00:14:44.372 }' 00:14:44.372 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.632 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:44.632 BaseBdev2 00:14:44.632 BaseBdev3' 00:14:44.632 23:48:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.632 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.632 [2024-12-06 23:48:56.158028] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.914 "name": "Existed_Raid", 00:14:44.914 "uuid": "4547e253-fd73-46f3-ab4f-d30d22640c88", 00:14:44.914 "strip_size_kb": 64, 00:14:44.914 "state": "online", 00:14:44.914 "raid_level": "raid5f", 00:14:44.914 "superblock": true, 00:14:44.914 "num_base_bdevs": 3, 00:14:44.914 "num_base_bdevs_discovered": 2, 00:14:44.914 "num_base_bdevs_operational": 2, 00:14:44.914 "base_bdevs_list": [ 00:14:44.914 { 00:14:44.914 "name": null, 00:14:44.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.914 "is_configured": false, 00:14:44.914 "data_offset": 0, 00:14:44.914 "data_size": 63488 00:14:44.914 }, 00:14:44.914 { 00:14:44.914 "name": "BaseBdev2", 00:14:44.914 "uuid": "8ea79a65-d0b3-4bb8-bb34-cf63678abd45", 00:14:44.914 "is_configured": true, 00:14:44.914 "data_offset": 2048, 00:14:44.914 "data_size": 63488 00:14:44.914 }, 00:14:44.914 { 00:14:44.914 "name": "BaseBdev3", 00:14:44.914 "uuid": "5d77d848-5cff-4488-a00d-4f5185480e4c", 00:14:44.914 "is_configured": true, 00:14:44.914 "data_offset": 2048, 00:14:44.914 "data_size": 63488 00:14:44.914 } 00:14:44.914 ] 00:14:44.914 }' 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.914 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.174 23:48:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:45.174 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.174 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.174 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:45.174 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.174 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.174 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.435 [2024-12-06 23:48:56.748254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.435 [2024-12-06 23:48:56.748402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.435 [2024-12-06 23:48:56.837088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.435 [2024-12-06 23:48:56.893024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:45.435 [2024-12-06 23:48:56.893066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.435 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.695 23:48:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.695 BaseBdev2 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:45.695 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.696 [ 00:14:45.696 { 00:14:45.696 "name": "BaseBdev2", 00:14:45.696 "aliases": [ 00:14:45.696 "d4ce9265-f3d1-40c0-8d22-8010c9557dba" 00:14:45.696 ], 00:14:45.696 "product_name": "Malloc disk", 00:14:45.696 "block_size": 512, 00:14:45.696 "num_blocks": 65536, 00:14:45.696 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:45.696 "assigned_rate_limits": { 00:14:45.696 "rw_ios_per_sec": 0, 00:14:45.696 "rw_mbytes_per_sec": 0, 00:14:45.696 "r_mbytes_per_sec": 0, 00:14:45.696 "w_mbytes_per_sec": 0 00:14:45.696 }, 00:14:45.696 "claimed": false, 00:14:45.696 "zoned": false, 00:14:45.696 "supported_io_types": { 00:14:45.696 "read": true, 00:14:45.696 "write": true, 00:14:45.696 "unmap": true, 00:14:45.696 "flush": true, 00:14:45.696 "reset": true, 00:14:45.696 "nvme_admin": false, 00:14:45.696 "nvme_io": false, 00:14:45.696 "nvme_io_md": false, 00:14:45.696 "write_zeroes": true, 00:14:45.696 "zcopy": true, 00:14:45.696 "get_zone_info": false, 00:14:45.696 "zone_management": false, 00:14:45.696 "zone_append": false, 
00:14:45.696 "compare": false, 00:14:45.696 "compare_and_write": false, 00:14:45.696 "abort": true, 00:14:45.696 "seek_hole": false, 00:14:45.696 "seek_data": false, 00:14:45.696 "copy": true, 00:14:45.696 "nvme_iov_md": false 00:14:45.696 }, 00:14:45.696 "memory_domains": [ 00:14:45.696 { 00:14:45.696 "dma_device_id": "system", 00:14:45.696 "dma_device_type": 1 00:14:45.696 }, 00:14:45.696 { 00:14:45.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.696 "dma_device_type": 2 00:14:45.696 } 00:14:45.696 ], 00:14:45.696 "driver_specific": {} 00:14:45.696 } 00:14:45.696 ] 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.696 BaseBdev3 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:45.696 
23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.696 [ 00:14:45.696 { 00:14:45.696 "name": "BaseBdev3", 00:14:45.696 "aliases": [ 00:14:45.696 "18c64716-69f1-47e4-a8fc-99d6758b358b" 00:14:45.696 ], 00:14:45.696 "product_name": "Malloc disk", 00:14:45.696 "block_size": 512, 00:14:45.696 "num_blocks": 65536, 00:14:45.696 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:45.696 "assigned_rate_limits": { 00:14:45.696 "rw_ios_per_sec": 0, 00:14:45.696 "rw_mbytes_per_sec": 0, 00:14:45.696 "r_mbytes_per_sec": 0, 00:14:45.696 "w_mbytes_per_sec": 0 00:14:45.696 }, 00:14:45.696 "claimed": false, 00:14:45.696 "zoned": false, 00:14:45.696 "supported_io_types": { 00:14:45.696 "read": true, 00:14:45.696 "write": true, 00:14:45.696 "unmap": true, 00:14:45.696 "flush": true, 00:14:45.696 "reset": true, 00:14:45.696 "nvme_admin": false, 00:14:45.696 "nvme_io": false, 00:14:45.696 "nvme_io_md": false, 00:14:45.696 "write_zeroes": true, 00:14:45.696 "zcopy": true, 00:14:45.696 "get_zone_info": 
false, 00:14:45.696 "zone_management": false, 00:14:45.696 "zone_append": false, 00:14:45.696 "compare": false, 00:14:45.696 "compare_and_write": false, 00:14:45.696 "abort": true, 00:14:45.696 "seek_hole": false, 00:14:45.696 "seek_data": false, 00:14:45.696 "copy": true, 00:14:45.696 "nvme_iov_md": false 00:14:45.696 }, 00:14:45.696 "memory_domains": [ 00:14:45.696 { 00:14:45.696 "dma_device_id": "system", 00:14:45.696 "dma_device_type": 1 00:14:45.696 }, 00:14:45.696 { 00:14:45.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.696 "dma_device_type": 2 00:14:45.696 } 00:14:45.696 ], 00:14:45.696 "driver_specific": {} 00:14:45.696 } 00:14:45.696 ] 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.696 [2024-12-06 23:48:57.196127] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.696 [2024-12-06 23:48:57.196223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.696 [2024-12-06 23:48:57.196269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.696 [2024-12-06 23:48:57.198014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.696 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.697 23:48:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.697 "name": "Existed_Raid", 00:14:45.697 "uuid": "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:45.697 "strip_size_kb": 64, 00:14:45.697 "state": "configuring", 00:14:45.697 "raid_level": "raid5f", 00:14:45.697 "superblock": true, 00:14:45.697 "num_base_bdevs": 3, 00:14:45.697 "num_base_bdevs_discovered": 2, 00:14:45.697 "num_base_bdevs_operational": 3, 00:14:45.697 "base_bdevs_list": [ 00:14:45.697 { 00:14:45.697 "name": "BaseBdev1", 00:14:45.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.697 "is_configured": false, 00:14:45.697 "data_offset": 0, 00:14:45.697 "data_size": 0 00:14:45.697 }, 00:14:45.697 { 00:14:45.697 "name": "BaseBdev2", 00:14:45.697 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:45.697 "is_configured": true, 00:14:45.697 "data_offset": 2048, 00:14:45.697 "data_size": 63488 00:14:45.697 }, 00:14:45.697 { 00:14:45.697 "name": "BaseBdev3", 00:14:45.697 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:45.697 "is_configured": true, 00:14:45.697 "data_offset": 2048, 00:14:45.697 "data_size": 63488 00:14:45.697 } 00:14:45.697 ] 00:14:45.697 }' 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.697 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.266 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:46.266 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.266 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.267 [2024-12-06 23:48:57.655540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.267 
23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.267 "name": "Existed_Raid", 00:14:46.267 "uuid": 
"f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:46.267 "strip_size_kb": 64, 00:14:46.267 "state": "configuring", 00:14:46.267 "raid_level": "raid5f", 00:14:46.267 "superblock": true, 00:14:46.267 "num_base_bdevs": 3, 00:14:46.267 "num_base_bdevs_discovered": 1, 00:14:46.267 "num_base_bdevs_operational": 3, 00:14:46.267 "base_bdevs_list": [ 00:14:46.267 { 00:14:46.267 "name": "BaseBdev1", 00:14:46.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.267 "is_configured": false, 00:14:46.267 "data_offset": 0, 00:14:46.267 "data_size": 0 00:14:46.267 }, 00:14:46.267 { 00:14:46.267 "name": null, 00:14:46.267 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:46.267 "is_configured": false, 00:14:46.267 "data_offset": 0, 00:14:46.267 "data_size": 63488 00:14:46.267 }, 00:14:46.267 { 00:14:46.267 "name": "BaseBdev3", 00:14:46.267 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:46.267 "is_configured": true, 00:14:46.267 "data_offset": 2048, 00:14:46.267 "data_size": 63488 00:14:46.267 } 00:14:46.267 ] 00:14:46.267 }' 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.267 23:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.527 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.527 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:46.527 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.527 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:46.787 23:48:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.787 [2024-12-06 23:48:58.158041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.787 BaseBdev1 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.787 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.788 [ 00:14:46.788 { 00:14:46.788 "name": "BaseBdev1", 00:14:46.788 "aliases": [ 00:14:46.788 "81519ce4-b9e5-4481-9307-5cd7bfafdb40" 00:14:46.788 ], 00:14:46.788 "product_name": "Malloc disk", 00:14:46.788 "block_size": 512, 00:14:46.788 "num_blocks": 65536, 00:14:46.788 "uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:46.788 "assigned_rate_limits": { 00:14:46.788 "rw_ios_per_sec": 0, 00:14:46.788 "rw_mbytes_per_sec": 0, 00:14:46.788 "r_mbytes_per_sec": 0, 00:14:46.788 "w_mbytes_per_sec": 0 00:14:46.788 }, 00:14:46.788 "claimed": true, 00:14:46.788 "claim_type": "exclusive_write", 00:14:46.788 "zoned": false, 00:14:46.788 "supported_io_types": { 00:14:46.788 "read": true, 00:14:46.788 "write": true, 00:14:46.788 "unmap": true, 00:14:46.788 "flush": true, 00:14:46.788 "reset": true, 00:14:46.788 "nvme_admin": false, 00:14:46.788 "nvme_io": false, 00:14:46.788 "nvme_io_md": false, 00:14:46.788 "write_zeroes": true, 00:14:46.788 "zcopy": true, 00:14:46.788 "get_zone_info": false, 00:14:46.788 "zone_management": false, 00:14:46.788 "zone_append": false, 00:14:46.788 "compare": false, 00:14:46.788 "compare_and_write": false, 00:14:46.788 "abort": true, 00:14:46.788 "seek_hole": false, 00:14:46.788 "seek_data": false, 00:14:46.788 "copy": true, 00:14:46.788 "nvme_iov_md": false 00:14:46.788 }, 00:14:46.788 "memory_domains": [ 00:14:46.788 { 00:14:46.788 "dma_device_id": "system", 00:14:46.788 "dma_device_type": 1 00:14:46.788 }, 00:14:46.788 { 00:14:46.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.788 "dma_device_type": 2 00:14:46.788 } 00:14:46.788 ], 00:14:46.788 "driver_specific": {} 00:14:46.788 } 00:14:46.788 ] 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.788 "name": "Existed_Raid", 00:14:46.788 "uuid": 
"f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:46.788 "strip_size_kb": 64, 00:14:46.788 "state": "configuring", 00:14:46.788 "raid_level": "raid5f", 00:14:46.788 "superblock": true, 00:14:46.788 "num_base_bdevs": 3, 00:14:46.788 "num_base_bdevs_discovered": 2, 00:14:46.788 "num_base_bdevs_operational": 3, 00:14:46.788 "base_bdevs_list": [ 00:14:46.788 { 00:14:46.788 "name": "BaseBdev1", 00:14:46.788 "uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:46.788 "is_configured": true, 00:14:46.788 "data_offset": 2048, 00:14:46.788 "data_size": 63488 00:14:46.788 }, 00:14:46.788 { 00:14:46.788 "name": null, 00:14:46.788 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:46.788 "is_configured": false, 00:14:46.788 "data_offset": 0, 00:14:46.788 "data_size": 63488 00:14:46.788 }, 00:14:46.788 { 00:14:46.788 "name": "BaseBdev3", 00:14:46.788 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:46.788 "is_configured": true, 00:14:46.788 "data_offset": 2048, 00:14:46.788 "data_size": 63488 00:14:46.788 } 00:14:46.788 ] 00:14:46.788 }' 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.788 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:47.357 23:48:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.357 [2024-12-06 23:48:58.645247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.357 "name": "Existed_Raid", 00:14:47.357 "uuid": "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:47.357 "strip_size_kb": 64, 00:14:47.357 "state": "configuring", 00:14:47.357 "raid_level": "raid5f", 00:14:47.357 "superblock": true, 00:14:47.357 "num_base_bdevs": 3, 00:14:47.357 "num_base_bdevs_discovered": 1, 00:14:47.357 "num_base_bdevs_operational": 3, 00:14:47.357 "base_bdevs_list": [ 00:14:47.357 { 00:14:47.357 "name": "BaseBdev1", 00:14:47.357 "uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:47.357 "is_configured": true, 00:14:47.357 "data_offset": 2048, 00:14:47.357 "data_size": 63488 00:14:47.357 }, 00:14:47.357 { 00:14:47.357 "name": null, 00:14:47.357 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:47.357 "is_configured": false, 00:14:47.357 "data_offset": 0, 00:14:47.357 "data_size": 63488 00:14:47.357 }, 00:14:47.357 { 00:14:47.357 "name": null, 00:14:47.357 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:47.357 "is_configured": false, 00:14:47.357 "data_offset": 0, 00:14:47.357 "data_size": 63488 00:14:47.357 } 00:14:47.357 ] 00:14:47.357 }' 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.357 23:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.617 [2024-12-06 23:48:59.136422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.617 23:48:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.617 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.876 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.876 "name": "Existed_Raid", 00:14:47.876 "uuid": "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:47.876 "strip_size_kb": 64, 00:14:47.876 "state": "configuring", 00:14:47.876 "raid_level": "raid5f", 00:14:47.876 "superblock": true, 00:14:47.876 "num_base_bdevs": 3, 00:14:47.876 "num_base_bdevs_discovered": 2, 00:14:47.876 "num_base_bdevs_operational": 3, 00:14:47.876 "base_bdevs_list": [ 00:14:47.876 { 00:14:47.876 "name": "BaseBdev1", 00:14:47.876 "uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:47.876 "is_configured": true, 00:14:47.876 "data_offset": 2048, 00:14:47.876 "data_size": 63488 00:14:47.876 }, 00:14:47.876 { 00:14:47.876 "name": null, 00:14:47.876 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:47.876 "is_configured": false, 00:14:47.876 "data_offset": 0, 00:14:47.876 "data_size": 63488 00:14:47.876 }, 00:14:47.876 { 00:14:47.876 "name": "BaseBdev3", 00:14:47.876 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:47.876 
"is_configured": true, 00:14:47.876 "data_offset": 2048, 00:14:47.876 "data_size": 63488 00:14:47.876 } 00:14:47.876 ] 00:14:47.876 }' 00:14:47.876 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.876 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.135 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.135 [2024-12-06 23:48:59.655741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.394 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.394 "name": "Existed_Raid", 00:14:48.394 "uuid": "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:48.394 "strip_size_kb": 64, 00:14:48.394 "state": "configuring", 00:14:48.394 "raid_level": "raid5f", 00:14:48.394 "superblock": true, 00:14:48.394 "num_base_bdevs": 3, 00:14:48.394 "num_base_bdevs_discovered": 1, 00:14:48.394 "num_base_bdevs_operational": 3, 00:14:48.394 "base_bdevs_list": [ 00:14:48.394 { 00:14:48.394 "name": null, 00:14:48.394 
"uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:48.394 "is_configured": false, 00:14:48.394 "data_offset": 0, 00:14:48.394 "data_size": 63488 00:14:48.394 }, 00:14:48.394 { 00:14:48.394 "name": null, 00:14:48.394 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:48.394 "is_configured": false, 00:14:48.395 "data_offset": 0, 00:14:48.395 "data_size": 63488 00:14:48.395 }, 00:14:48.395 { 00:14:48.395 "name": "BaseBdev3", 00:14:48.395 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:48.395 "is_configured": true, 00:14:48.395 "data_offset": 2048, 00:14:48.395 "data_size": 63488 00:14:48.395 } 00:14:48.395 ] 00:14:48.395 }' 00:14:48.395 23:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.395 23:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.654 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.913 [2024-12-06 23:49:00.217431] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.913 "name": "Existed_Raid", 00:14:48.913 "uuid": "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:48.913 "strip_size_kb": 64, 00:14:48.913 "state": "configuring", 00:14:48.913 "raid_level": "raid5f", 00:14:48.913 "superblock": true, 00:14:48.913 "num_base_bdevs": 3, 00:14:48.913 "num_base_bdevs_discovered": 2, 00:14:48.913 "num_base_bdevs_operational": 3, 00:14:48.913 "base_bdevs_list": [ 00:14:48.913 { 00:14:48.913 "name": null, 00:14:48.913 "uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:48.913 "is_configured": false, 00:14:48.913 "data_offset": 0, 00:14:48.913 "data_size": 63488 00:14:48.913 }, 00:14:48.913 { 00:14:48.913 "name": "BaseBdev2", 00:14:48.913 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:48.913 "is_configured": true, 00:14:48.913 "data_offset": 2048, 00:14:48.913 "data_size": 63488 00:14:48.913 }, 00:14:48.913 { 00:14:48.913 "name": "BaseBdev3", 00:14:48.913 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:48.913 "is_configured": true, 00:14:48.913 "data_offset": 2048, 00:14:48.913 "data_size": 63488 00:14:48.913 } 00:14:48.913 ] 00:14:48.913 }' 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.913 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.173 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 81519ce4-b9e5-4481-9307-5cd7bfafdb40 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.434 [2024-12-06 23:49:00.792588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:49.434 [2024-12-06 23:49:00.792829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:49.434 [2024-12-06 23:49:00.792852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:49.434 [2024-12-06 23:49:00.793104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:49.434 NewBaseBdev 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.434 [2024-12-06 23:49:00.798586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:49.434 [2024-12-06 23:49:00.798647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:49.434 [2024-12-06 23:49:00.798874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.434 [ 00:14:49.434 { 00:14:49.434 "name": "NewBaseBdev", 00:14:49.434 "aliases": [ 00:14:49.434 "81519ce4-b9e5-4481-9307-5cd7bfafdb40" 00:14:49.434 ], 00:14:49.434 "product_name": "Malloc disk", 00:14:49.434 "block_size": 512, 
00:14:49.434 "num_blocks": 65536, 00:14:49.434 "uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:49.434 "assigned_rate_limits": { 00:14:49.434 "rw_ios_per_sec": 0, 00:14:49.434 "rw_mbytes_per_sec": 0, 00:14:49.434 "r_mbytes_per_sec": 0, 00:14:49.434 "w_mbytes_per_sec": 0 00:14:49.434 }, 00:14:49.434 "claimed": true, 00:14:49.434 "claim_type": "exclusive_write", 00:14:49.434 "zoned": false, 00:14:49.434 "supported_io_types": { 00:14:49.434 "read": true, 00:14:49.434 "write": true, 00:14:49.434 "unmap": true, 00:14:49.434 "flush": true, 00:14:49.434 "reset": true, 00:14:49.434 "nvme_admin": false, 00:14:49.434 "nvme_io": false, 00:14:49.434 "nvme_io_md": false, 00:14:49.434 "write_zeroes": true, 00:14:49.434 "zcopy": true, 00:14:49.434 "get_zone_info": false, 00:14:49.434 "zone_management": false, 00:14:49.434 "zone_append": false, 00:14:49.434 "compare": false, 00:14:49.434 "compare_and_write": false, 00:14:49.434 "abort": true, 00:14:49.434 "seek_hole": false, 00:14:49.434 "seek_data": false, 00:14:49.434 "copy": true, 00:14:49.434 "nvme_iov_md": false 00:14:49.434 }, 00:14:49.434 "memory_domains": [ 00:14:49.434 { 00:14:49.434 "dma_device_id": "system", 00:14:49.434 "dma_device_type": 1 00:14:49.434 }, 00:14:49.434 { 00:14:49.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.434 "dma_device_type": 2 00:14:49.434 } 00:14:49.434 ], 00:14:49.434 "driver_specific": {} 00:14:49.434 } 00:14:49.434 ] 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.434 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.434 "name": "Existed_Raid", 00:14:49.434 "uuid": "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:49.434 "strip_size_kb": 64, 00:14:49.434 "state": "online", 00:14:49.434 "raid_level": "raid5f", 00:14:49.434 "superblock": true, 00:14:49.434 "num_base_bdevs": 3, 00:14:49.434 "num_base_bdevs_discovered": 3, 00:14:49.434 "num_base_bdevs_operational": 3, 00:14:49.434 "base_bdevs_list": [ 00:14:49.434 { 00:14:49.434 "name": 
"NewBaseBdev", 00:14:49.434 "uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:49.434 "is_configured": true, 00:14:49.434 "data_offset": 2048, 00:14:49.434 "data_size": 63488 00:14:49.434 }, 00:14:49.434 { 00:14:49.434 "name": "BaseBdev2", 00:14:49.434 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:49.434 "is_configured": true, 00:14:49.434 "data_offset": 2048, 00:14:49.434 "data_size": 63488 00:14:49.434 }, 00:14:49.434 { 00:14:49.434 "name": "BaseBdev3", 00:14:49.434 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:49.434 "is_configured": true, 00:14:49.435 "data_offset": 2048, 00:14:49.435 "data_size": 63488 00:14:49.435 } 00:14:49.435 ] 00:14:49.435 }' 00:14:49.435 23:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.435 23:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.013 23:49:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.013 [2024-12-06 23:49:01.299940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.013 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.013 "name": "Existed_Raid", 00:14:50.013 "aliases": [ 00:14:50.013 "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17" 00:14:50.013 ], 00:14:50.013 "product_name": "Raid Volume", 00:14:50.013 "block_size": 512, 00:14:50.013 "num_blocks": 126976, 00:14:50.013 "uuid": "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:50.013 "assigned_rate_limits": { 00:14:50.013 "rw_ios_per_sec": 0, 00:14:50.013 "rw_mbytes_per_sec": 0, 00:14:50.013 "r_mbytes_per_sec": 0, 00:14:50.013 "w_mbytes_per_sec": 0 00:14:50.013 }, 00:14:50.013 "claimed": false, 00:14:50.013 "zoned": false, 00:14:50.013 "supported_io_types": { 00:14:50.013 "read": true, 00:14:50.013 "write": true, 00:14:50.013 "unmap": false, 00:14:50.013 "flush": false, 00:14:50.013 "reset": true, 00:14:50.013 "nvme_admin": false, 00:14:50.013 "nvme_io": false, 00:14:50.013 "nvme_io_md": false, 00:14:50.013 "write_zeroes": true, 00:14:50.013 "zcopy": false, 00:14:50.014 "get_zone_info": false, 00:14:50.014 "zone_management": false, 00:14:50.014 "zone_append": false, 00:14:50.014 "compare": false, 00:14:50.014 "compare_and_write": false, 00:14:50.014 "abort": false, 00:14:50.014 "seek_hole": false, 00:14:50.014 "seek_data": false, 00:14:50.014 "copy": false, 00:14:50.014 "nvme_iov_md": false 00:14:50.014 }, 00:14:50.014 "driver_specific": { 00:14:50.014 "raid": { 00:14:50.014 "uuid": "f41bdbbb-4f1a-4586-8b8c-feaa9628ec17", 00:14:50.014 "strip_size_kb": 64, 00:14:50.014 "state": "online", 00:14:50.014 "raid_level": "raid5f", 00:14:50.014 "superblock": true, 00:14:50.014 "num_base_bdevs": 3, 00:14:50.014 
"num_base_bdevs_discovered": 3, 00:14:50.014 "num_base_bdevs_operational": 3, 00:14:50.014 "base_bdevs_list": [ 00:14:50.014 { 00:14:50.014 "name": "NewBaseBdev", 00:14:50.014 "uuid": "81519ce4-b9e5-4481-9307-5cd7bfafdb40", 00:14:50.014 "is_configured": true, 00:14:50.014 "data_offset": 2048, 00:14:50.014 "data_size": 63488 00:14:50.014 }, 00:14:50.014 { 00:14:50.014 "name": "BaseBdev2", 00:14:50.014 "uuid": "d4ce9265-f3d1-40c0-8d22-8010c9557dba", 00:14:50.014 "is_configured": true, 00:14:50.014 "data_offset": 2048, 00:14:50.014 "data_size": 63488 00:14:50.014 }, 00:14:50.014 { 00:14:50.014 "name": "BaseBdev3", 00:14:50.014 "uuid": "18c64716-69f1-47e4-a8fc-99d6758b358b", 00:14:50.014 "is_configured": true, 00:14:50.014 "data_offset": 2048, 00:14:50.014 "data_size": 63488 00:14:50.014 } 00:14:50.014 ] 00:14:50.014 } 00:14:50.014 } 00:14:50.014 }' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:50.014 BaseBdev2 00:14:50.014 BaseBdev3' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.014 23:49:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.014 23:49:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.014 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.274 [2024-12-06 23:49:01.591690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.274 [2024-12-06 23:49:01.591712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.274 [2024-12-06 23:49:01.591769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.274 [2024-12-06 23:49:01.592037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.274 [2024-12-06 23:49:01.592054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80422 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80422 ']' 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80422 00:14:50.274 23:49:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80422 00:14:50.274 killing process with pid 80422 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80422' 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80422 00:14:50.274 [2024-12-06 23:49:01.637840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.274 23:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80422 00:14:50.535 [2024-12-06 23:49:01.921056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.476 23:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:51.476 00:14:51.476 real 0m10.480s 00:14:51.476 user 0m16.666s 00:14:51.476 sys 0m1.978s 00:14:51.476 ************************************ 00:14:51.476 END TEST raid5f_state_function_test_sb 00:14:51.476 ************************************ 00:14:51.476 23:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.476 23:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.736 23:49:03 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:51.736 23:49:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:51.736 
23:49:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.736 23:49:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.736 ************************************ 00:14:51.736 START TEST raid5f_superblock_test 00:14:51.736 ************************************ 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:51.736 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81043 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81043 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81043 ']' 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.737 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.737 [2024-12-06 23:49:03.142805] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:14:51.737 [2024-12-06 23:49:03.142994] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81043 ] 00:14:51.997 [2024-12-06 23:49:03.317640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.997 [2024-12-06 23:49:03.422609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.258 [2024-12-06 23:49:03.609252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.258 [2024-12-06 23:49:03.609295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.518 malloc1 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.518 [2024-12-06 23:49:03.988628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:52.518 [2024-12-06 23:49:03.988744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.518 [2024-12-06 23:49:03.988799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:52.518 [2024-12-06 23:49:03.988832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.518 [2024-12-06 23:49:03.990878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.518 [2024-12-06 23:49:03.990944] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:52.518 pt1 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.518 23:49:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.518 malloc2 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.518 [2024-12-06 23:49:04.044616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:52.518 [2024-12-06 23:49:04.044719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.518 [2024-12-06 23:49:04.044770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:52.518 [2024-12-06 23:49:04.044800] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.518 [2024-12-06 23:49:04.046815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.518 [2024-12-06 23:49:04.046883] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:52.518 pt2 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.518 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.779 malloc3 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.779 [2024-12-06 23:49:04.117766] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:52.779 [2024-12-06 23:49:04.117817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.779 [2024-12-06 23:49:04.117855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:52.779 [2024-12-06 23:49:04.117864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.779 [2024-12-06 23:49:04.119874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.779 [2024-12-06 23:49:04.119913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:52.779 pt3 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.779 [2024-12-06 23:49:04.129795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:52.779 [2024-12-06 23:49:04.131469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:52.779 [2024-12-06 23:49:04.131536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:52.779 [2024-12-06 23:49:04.131719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:52.779 [2024-12-06 23:49:04.131740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:52.779 [2024-12-06 23:49:04.131961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:52.779 [2024-12-06 23:49:04.137155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:52.779 [2024-12-06 23:49:04.137217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:52.779 [2024-12-06 23:49:04.137439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.779 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.779 "name": "raid_bdev1", 00:14:52.779 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:52.779 "strip_size_kb": 64, 00:14:52.779 "state": "online", 00:14:52.779 "raid_level": "raid5f", 00:14:52.779 "superblock": true, 00:14:52.779 "num_base_bdevs": 3, 00:14:52.779 "num_base_bdevs_discovered": 3, 00:14:52.779 "num_base_bdevs_operational": 3, 00:14:52.779 "base_bdevs_list": [ 00:14:52.779 { 00:14:52.779 "name": "pt1", 00:14:52.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.779 "is_configured": true, 00:14:52.779 "data_offset": 2048, 00:14:52.779 "data_size": 63488 00:14:52.779 }, 00:14:52.779 { 00:14:52.779 "name": "pt2", 00:14:52.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.779 "is_configured": true, 00:14:52.779 "data_offset": 2048, 00:14:52.779 "data_size": 63488 00:14:52.779 }, 00:14:52.779 { 00:14:52.779 "name": "pt3", 00:14:52.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.779 "is_configured": true, 00:14:52.779 "data_offset": 2048, 00:14:52.779 "data_size": 63488 00:14:52.779 } 00:14:52.780 ] 00:14:52.780 }' 00:14:52.780 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.780 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:53.040 23:49:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.040 [2024-12-06 23:49:04.578913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.040 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.300 "name": "raid_bdev1", 00:14:53.300 "aliases": [ 00:14:53.300 "e02a869d-91fa-4855-a359-078be316b3be" 00:14:53.300 ], 00:14:53.300 "product_name": "Raid Volume", 00:14:53.300 "block_size": 512, 00:14:53.300 "num_blocks": 126976, 00:14:53.300 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:53.300 "assigned_rate_limits": { 00:14:53.300 "rw_ios_per_sec": 0, 00:14:53.300 "rw_mbytes_per_sec": 0, 00:14:53.300 "r_mbytes_per_sec": 0, 00:14:53.300 "w_mbytes_per_sec": 0 00:14:53.300 }, 00:14:53.300 "claimed": false, 00:14:53.300 "zoned": false, 00:14:53.300 "supported_io_types": { 00:14:53.300 "read": true, 00:14:53.300 "write": true, 00:14:53.300 "unmap": false, 00:14:53.300 "flush": false, 00:14:53.300 "reset": true, 00:14:53.300 "nvme_admin": false, 00:14:53.300 "nvme_io": false, 00:14:53.300 "nvme_io_md": false, 
00:14:53.300 "write_zeroes": true, 00:14:53.300 "zcopy": false, 00:14:53.300 "get_zone_info": false, 00:14:53.300 "zone_management": false, 00:14:53.300 "zone_append": false, 00:14:53.300 "compare": false, 00:14:53.300 "compare_and_write": false, 00:14:53.300 "abort": false, 00:14:53.300 "seek_hole": false, 00:14:53.300 "seek_data": false, 00:14:53.300 "copy": false, 00:14:53.300 "nvme_iov_md": false 00:14:53.300 }, 00:14:53.300 "driver_specific": { 00:14:53.300 "raid": { 00:14:53.300 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:53.300 "strip_size_kb": 64, 00:14:53.300 "state": "online", 00:14:53.300 "raid_level": "raid5f", 00:14:53.300 "superblock": true, 00:14:53.300 "num_base_bdevs": 3, 00:14:53.300 "num_base_bdevs_discovered": 3, 00:14:53.300 "num_base_bdevs_operational": 3, 00:14:53.300 "base_bdevs_list": [ 00:14:53.300 { 00:14:53.300 "name": "pt1", 00:14:53.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.300 "is_configured": true, 00:14:53.300 "data_offset": 2048, 00:14:53.300 "data_size": 63488 00:14:53.300 }, 00:14:53.300 { 00:14:53.300 "name": "pt2", 00:14:53.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.300 "is_configured": true, 00:14:53.300 "data_offset": 2048, 00:14:53.300 "data_size": 63488 00:14:53.300 }, 00:14:53.300 { 00:14:53.300 "name": "pt3", 00:14:53.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:53.300 "is_configured": true, 00:14:53.300 "data_offset": 2048, 00:14:53.300 "data_size": 63488 00:14:53.300 } 00:14:53.300 ] 00:14:53.300 } 00:14:53.300 } 00:14:53.300 }' 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:53.300 pt2 00:14:53.300 pt3' 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:53.300 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.301 
23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.301 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.301 [2024-12-06 23:49:04.850400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e02a869d-91fa-4855-a359-078be316b3be 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e02a869d-91fa-4855-a359-078be316b3be ']' 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.561 23:49:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.561 [2024-12-06 23:49:04.890175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.561 [2024-12-06 23:49:04.890238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.561 [2024-12-06 23:49:04.890334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.561 [2024-12-06 23:49:04.890411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.561 [2024-12-06 23:49:04.890455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:53.561 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 23:49:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 [2024-12-06 23:49:05.014002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:53.562 [2024-12-06 23:49:05.015844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:53.562 [2024-12-06 23:49:05.015950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:53.562 [2024-12-06 23:49:05.016013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:53.562 [2024-12-06 23:49:05.016112] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:53.562 [2024-12-06 23:49:05.016164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:53.562 [2024-12-06 23:49:05.016213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.562 [2024-12-06 23:49:05.016239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:53.562 request: 00:14:53.562 { 00:14:53.562 "name": "raid_bdev1", 00:14:53.562 "raid_level": "raid5f", 00:14:53.562 "base_bdevs": [ 00:14:53.562 "malloc1", 00:14:53.562 "malloc2", 00:14:53.562 "malloc3" 00:14:53.562 ], 00:14:53.562 "strip_size_kb": 64, 00:14:53.562 "superblock": false, 00:14:53.562 "method": "bdev_raid_create", 00:14:53.562 "req_id": 1 00:14:53.562 } 00:14:53.562 Got JSON-RPC error response 00:14:53.562 response: 00:14:53.562 { 00:14:53.562 "code": -17, 00:14:53.562 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:53.562 } 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 [2024-12-06 23:49:05.077851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.562 [2024-12-06 23:49:05.077934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.562 [2024-12-06 23:49:05.077967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:53.562 [2024-12-06 23:49:05.077992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.562 [2024-12-06 23:49:05.080034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.562 [2024-12-06 23:49:05.080102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.562 [2024-12-06 23:49:05.080204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:53.562 [2024-12-06 23:49:05.080275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:53.562 pt1 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.822 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.822 "name": "raid_bdev1", 00:14:53.822 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:53.822 "strip_size_kb": 64, 00:14:53.822 "state": "configuring", 00:14:53.822 "raid_level": "raid5f", 00:14:53.822 "superblock": true, 00:14:53.822 "num_base_bdevs": 3, 00:14:53.822 "num_base_bdevs_discovered": 1, 00:14:53.822 
"num_base_bdevs_operational": 3, 00:14:53.822 "base_bdevs_list": [ 00:14:53.822 { 00:14:53.822 "name": "pt1", 00:14:53.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.822 "is_configured": true, 00:14:53.822 "data_offset": 2048, 00:14:53.822 "data_size": 63488 00:14:53.822 }, 00:14:53.822 { 00:14:53.822 "name": null, 00:14:53.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.822 "is_configured": false, 00:14:53.822 "data_offset": 2048, 00:14:53.822 "data_size": 63488 00:14:53.822 }, 00:14:53.822 { 00:14:53.822 "name": null, 00:14:53.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:53.822 "is_configured": false, 00:14:53.822 "data_offset": 2048, 00:14:53.822 "data_size": 63488 00:14:53.822 } 00:14:53.822 ] 00:14:53.822 }' 00:14:53.822 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.822 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.081 [2024-12-06 23:49:05.401316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.081 [2024-12-06 23:49:05.401412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.081 [2024-12-06 23:49:05.401448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:54.081 [2024-12-06 23:49:05.401493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.081 [2024-12-06 23:49:05.401943] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.081 [2024-12-06 23:49:05.402009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.081 [2024-12-06 23:49:05.402119] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:54.081 [2024-12-06 23:49:05.402173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.081 pt2 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.081 [2024-12-06 23:49:05.409310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.081 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.082 "name": "raid_bdev1", 00:14:54.082 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:54.082 "strip_size_kb": 64, 00:14:54.082 "state": "configuring", 00:14:54.082 "raid_level": "raid5f", 00:14:54.082 "superblock": true, 00:14:54.082 "num_base_bdevs": 3, 00:14:54.082 "num_base_bdevs_discovered": 1, 00:14:54.082 "num_base_bdevs_operational": 3, 00:14:54.082 "base_bdevs_list": [ 00:14:54.082 { 00:14:54.082 "name": "pt1", 00:14:54.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.082 "is_configured": true, 00:14:54.082 "data_offset": 2048, 00:14:54.082 "data_size": 63488 00:14:54.082 }, 00:14:54.082 { 00:14:54.082 "name": null, 00:14:54.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.082 "is_configured": false, 00:14:54.082 "data_offset": 0, 00:14:54.082 "data_size": 63488 00:14:54.082 }, 00:14:54.082 { 00:14:54.082 "name": null, 00:14:54.082 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.082 "is_configured": false, 00:14:54.082 "data_offset": 2048, 00:14:54.082 "data_size": 63488 00:14:54.082 } 00:14:54.082 ] 00:14:54.082 }' 00:14:54.082 23:49:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.082 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.341 [2024-12-06 23:49:05.852532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.341 [2024-12-06 23:49:05.852644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.341 [2024-12-06 23:49:05.852671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:54.341 [2024-12-06 23:49:05.852682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.341 [2024-12-06 23:49:05.853059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.341 [2024-12-06 23:49:05.853079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.341 [2024-12-06 23:49:05.853138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:54.341 [2024-12-06 23:49:05.853158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.341 pt2 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:54.341 23:49:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.341 [2024-12-06 23:49:05.864513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:54.341 [2024-12-06 23:49:05.864561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.341 [2024-12-06 23:49:05.864573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:54.341 [2024-12-06 23:49:05.864583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.341 [2024-12-06 23:49:05.864913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.341 [2024-12-06 23:49:05.864934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:54.341 [2024-12-06 23:49:05.865001] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:54.341 [2024-12-06 23:49:05.865027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:54.341 [2024-12-06 23:49:05.865159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:54.341 [2024-12-06 23:49:05.865172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:54.341 [2024-12-06 23:49:05.865392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:54.341 [2024-12-06 23:49:05.870561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:54.341 pt3 00:14:54.341 [2024-12-06 23:49:05.870619] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:54.341 [2024-12-06 23:49:05.870845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.341 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.600 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.600 "name": "raid_bdev1", 00:14:54.600 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:54.600 "strip_size_kb": 64, 00:14:54.600 "state": "online", 00:14:54.600 "raid_level": "raid5f", 00:14:54.600 "superblock": true, 00:14:54.600 "num_base_bdevs": 3, 00:14:54.600 "num_base_bdevs_discovered": 3, 00:14:54.600 "num_base_bdevs_operational": 3, 00:14:54.600 "base_bdevs_list": [ 00:14:54.600 { 00:14:54.600 "name": "pt1", 00:14:54.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.600 "is_configured": true, 00:14:54.600 "data_offset": 2048, 00:14:54.600 "data_size": 63488 00:14:54.600 }, 00:14:54.600 { 00:14:54.600 "name": "pt2", 00:14:54.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.600 "is_configured": true, 00:14:54.600 "data_offset": 2048, 00:14:54.600 "data_size": 63488 00:14:54.600 }, 00:14:54.600 { 00:14:54.600 "name": "pt3", 00:14:54.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.600 "is_configured": true, 00:14:54.600 "data_offset": 2048, 00:14:54.600 "data_size": 63488 00:14:54.600 } 00:14:54.600 ] 00:14:54.600 }' 00:14:54.600 23:49:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.600 23:49:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.859 [2024-12-06 23:49:06.332674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.859 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:54.860 "name": "raid_bdev1", 00:14:54.860 "aliases": [ 00:14:54.860 "e02a869d-91fa-4855-a359-078be316b3be" 00:14:54.860 ], 00:14:54.860 "product_name": "Raid Volume", 00:14:54.860 "block_size": 512, 00:14:54.860 "num_blocks": 126976, 00:14:54.860 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:54.860 "assigned_rate_limits": { 00:14:54.860 "rw_ios_per_sec": 0, 00:14:54.860 "rw_mbytes_per_sec": 0, 00:14:54.860 "r_mbytes_per_sec": 0, 00:14:54.860 "w_mbytes_per_sec": 0 00:14:54.860 }, 00:14:54.860 "claimed": false, 00:14:54.860 "zoned": false, 00:14:54.860 "supported_io_types": { 00:14:54.860 "read": true, 00:14:54.860 "write": true, 00:14:54.860 "unmap": false, 00:14:54.860 "flush": false, 00:14:54.860 "reset": true, 00:14:54.860 "nvme_admin": false, 00:14:54.860 "nvme_io": false, 00:14:54.860 "nvme_io_md": false, 00:14:54.860 "write_zeroes": true, 00:14:54.860 "zcopy": false, 00:14:54.860 
"get_zone_info": false, 00:14:54.860 "zone_management": false, 00:14:54.860 "zone_append": false, 00:14:54.860 "compare": false, 00:14:54.860 "compare_and_write": false, 00:14:54.860 "abort": false, 00:14:54.860 "seek_hole": false, 00:14:54.860 "seek_data": false, 00:14:54.860 "copy": false, 00:14:54.860 "nvme_iov_md": false 00:14:54.860 }, 00:14:54.860 "driver_specific": { 00:14:54.860 "raid": { 00:14:54.860 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:54.860 "strip_size_kb": 64, 00:14:54.860 "state": "online", 00:14:54.860 "raid_level": "raid5f", 00:14:54.860 "superblock": true, 00:14:54.860 "num_base_bdevs": 3, 00:14:54.860 "num_base_bdevs_discovered": 3, 00:14:54.860 "num_base_bdevs_operational": 3, 00:14:54.860 "base_bdevs_list": [ 00:14:54.860 { 00:14:54.860 "name": "pt1", 00:14:54.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.860 "is_configured": true, 00:14:54.860 "data_offset": 2048, 00:14:54.860 "data_size": 63488 00:14:54.860 }, 00:14:54.860 { 00:14:54.860 "name": "pt2", 00:14:54.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.860 "is_configured": true, 00:14:54.860 "data_offset": 2048, 00:14:54.860 "data_size": 63488 00:14:54.860 }, 00:14:54.860 { 00:14:54.860 "name": "pt3", 00:14:54.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.860 "is_configured": true, 00:14:54.860 "data_offset": 2048, 00:14:54.860 "data_size": 63488 00:14:54.860 } 00:14:54.860 ] 00:14:54.860 } 00:14:54.860 } 00:14:54.860 }' 00:14:54.860 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.860 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:54.860 pt2 00:14:54.860 pt3' 00:14:54.860 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.119 23:49:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:55.119 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.120 [2024-12-06 23:49:06.608113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e02a869d-91fa-4855-a359-078be316b3be '!=' e02a869d-91fa-4855-a359-078be316b3be ']' 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.120 [2024-12-06 23:49:06.651927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:55.120 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.379 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.379 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.379 "name": "raid_bdev1", 00:14:55.379 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:55.379 "strip_size_kb": 64, 00:14:55.379 "state": "online", 00:14:55.379 "raid_level": "raid5f", 00:14:55.379 "superblock": true, 00:14:55.379 "num_base_bdevs": 3, 00:14:55.379 "num_base_bdevs_discovered": 2, 00:14:55.379 "num_base_bdevs_operational": 2, 00:14:55.379 "base_bdevs_list": [ 00:14:55.379 { 00:14:55.379 "name": null, 00:14:55.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.379 "is_configured": false, 00:14:55.379 "data_offset": 0, 00:14:55.379 "data_size": 63488 00:14:55.379 }, 00:14:55.379 { 00:14:55.379 "name": "pt2", 00:14:55.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.379 "is_configured": true, 00:14:55.379 "data_offset": 2048, 00:14:55.379 "data_size": 63488 00:14:55.379 }, 00:14:55.379 { 00:14:55.379 "name": "pt3", 00:14:55.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.379 "is_configured": true, 00:14:55.379 "data_offset": 2048, 00:14:55.379 "data_size": 63488 00:14:55.379 } 00:14:55.379 ] 00:14:55.379 }' 00:14:55.379 23:49:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.379 23:49:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.639 [2024-12-06 23:49:07.103268] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.639 [2024-12-06 23:49:07.103332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.639 [2024-12-06 23:49:07.103426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.639 [2024-12-06 23:49:07.103498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.639 [2024-12-06 23:49:07.103545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.639 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.640 [2024-12-06 23:49:07.183149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:55.640 [2024-12-06 23:49:07.183198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.640 [2024-12-06 23:49:07.183213] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:55.640 [2024-12-06 23:49:07.183222] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:55.640 [2024-12-06 23:49:07.185222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.640 [2024-12-06 23:49:07.185303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:55.640 [2024-12-06 23:49:07.185377] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:55.640 [2024-12-06 23:49:07.185422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.640 pt2 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.640 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.900 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.900 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.900 "name": "raid_bdev1", 00:14:55.900 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:55.900 "strip_size_kb": 64, 00:14:55.900 "state": "configuring", 00:14:55.900 "raid_level": "raid5f", 00:14:55.900 "superblock": true, 00:14:55.900 "num_base_bdevs": 3, 00:14:55.900 "num_base_bdevs_discovered": 1, 00:14:55.900 "num_base_bdevs_operational": 2, 00:14:55.900 "base_bdevs_list": [ 00:14:55.900 { 00:14:55.900 "name": null, 00:14:55.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.900 "is_configured": false, 00:14:55.900 "data_offset": 2048, 00:14:55.900 "data_size": 63488 00:14:55.900 }, 00:14:55.900 { 00:14:55.900 "name": "pt2", 00:14:55.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.900 "is_configured": true, 00:14:55.900 "data_offset": 2048, 00:14:55.900 "data_size": 63488 00:14:55.900 }, 00:14:55.900 { 00:14:55.900 "name": null, 00:14:55.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.900 "is_configured": false, 00:14:55.900 "data_offset": 2048, 00:14:55.901 "data_size": 63488 00:14:55.901 } 00:14:55.901 ] 00:14:55.901 }' 00:14:55.901 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.901 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 [2024-12-06 23:49:07.618407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:56.162 [2024-12-06 23:49:07.618517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.162 [2024-12-06 23:49:07.618554] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:56.162 [2024-12-06 23:49:07.618583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.162 [2024-12-06 23:49:07.619051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.162 [2024-12-06 23:49:07.619109] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:56.162 [2024-12-06 23:49:07.619208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:56.162 [2024-12-06 23:49:07.619260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:56.162 [2024-12-06 23:49:07.619405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:56.162 [2024-12-06 23:49:07.619444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:56.162 [2024-12-06 23:49:07.619733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:56.162 [2024-12-06 23:49:07.624557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:56.162 [2024-12-06 23:49:07.624614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:14:56.162 [2024-12-06 23:49:07.624958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.162 pt3 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.162 23:49:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.162 "name": "raid_bdev1", 00:14:56.162 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:56.162 "strip_size_kb": 64, 00:14:56.162 "state": "online", 00:14:56.162 "raid_level": "raid5f", 00:14:56.162 "superblock": true, 00:14:56.162 "num_base_bdevs": 3, 00:14:56.162 "num_base_bdevs_discovered": 2, 00:14:56.162 "num_base_bdevs_operational": 2, 00:14:56.162 "base_bdevs_list": [ 00:14:56.162 { 00:14:56.162 "name": null, 00:14:56.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.162 "is_configured": false, 00:14:56.162 "data_offset": 2048, 00:14:56.162 "data_size": 63488 00:14:56.162 }, 00:14:56.162 { 00:14:56.162 "name": "pt2", 00:14:56.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.162 "is_configured": true, 00:14:56.162 "data_offset": 2048, 00:14:56.162 "data_size": 63488 00:14:56.162 }, 00:14:56.162 { 00:14:56.162 "name": "pt3", 00:14:56.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.162 "is_configured": true, 00:14:56.162 "data_offset": 2048, 00:14:56.162 "data_size": 63488 00:14:56.162 } 00:14:56.162 ] 00:14:56.162 }' 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.162 23:49:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.733 [2024-12-06 23:49:08.082636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.733 [2024-12-06 23:49:08.082719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.733 [2024-12-06 23:49:08.082812] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.733 [2024-12-06 23:49:08.082882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.733 [2024-12-06 23:49:08.082930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.733 [2024-12-06 23:49:08.170511] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.733 [2024-12-06 23:49:08.170560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.733 [2024-12-06 23:49:08.170576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:56.733 [2024-12-06 23:49:08.170584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.733 [2024-12-06 23:49:08.172767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.733 [2024-12-06 23:49:08.172801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.733 [2024-12-06 23:49:08.172867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:56.733 [2024-12-06 23:49:08.172909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.733 [2024-12-06 23:49:08.173053] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:56.733 [2024-12-06 23:49:08.173074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.733 [2024-12-06 23:49:08.173088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:56.733 [2024-12-06 23:49:08.173149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.733 pt1 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:56.733 23:49:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.733 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.733 "name": "raid_bdev1", 00:14:56.733 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:56.733 "strip_size_kb": 64, 00:14:56.733 "state": "configuring", 00:14:56.733 "raid_level": "raid5f", 00:14:56.733 
"superblock": true, 00:14:56.733 "num_base_bdevs": 3, 00:14:56.733 "num_base_bdevs_discovered": 1, 00:14:56.733 "num_base_bdevs_operational": 2, 00:14:56.734 "base_bdevs_list": [ 00:14:56.734 { 00:14:56.734 "name": null, 00:14:56.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.734 "is_configured": false, 00:14:56.734 "data_offset": 2048, 00:14:56.734 "data_size": 63488 00:14:56.734 }, 00:14:56.734 { 00:14:56.734 "name": "pt2", 00:14:56.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.734 "is_configured": true, 00:14:56.734 "data_offset": 2048, 00:14:56.734 "data_size": 63488 00:14:56.734 }, 00:14:56.734 { 00:14:56.734 "name": null, 00:14:56.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.734 "is_configured": false, 00:14:56.734 "data_offset": 2048, 00:14:56.734 "data_size": 63488 00:14:56.734 } 00:14:56.734 ] 00:14:56.734 }' 00:14:56.734 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.734 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.304 [2024-12-06 23:49:08.669654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:57.304 [2024-12-06 23:49:08.669778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.304 [2024-12-06 23:49:08.669814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:57.304 [2024-12-06 23:49:08.669841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.304 [2024-12-06 23:49:08.670304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.304 [2024-12-06 23:49:08.670363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:57.304 [2024-12-06 23:49:08.670465] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:57.304 [2024-12-06 23:49:08.670513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:57.304 [2024-12-06 23:49:08.670672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:57.304 [2024-12-06 23:49:08.670710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.304 [2024-12-06 23:49:08.670970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:57.304 [2024-12-06 23:49:08.676538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:57.304 [2024-12-06 23:49:08.676601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:57.304 [2024-12-06 23:49:08.676895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.304 pt3 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.304 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.305 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.305 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.305 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.305 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.305 "name": "raid_bdev1", 00:14:57.305 "uuid": "e02a869d-91fa-4855-a359-078be316b3be", 00:14:57.305 "strip_size_kb": 64, 00:14:57.305 "state": "online", 00:14:57.305 "raid_level": 
"raid5f", 00:14:57.305 "superblock": true, 00:14:57.305 "num_base_bdevs": 3, 00:14:57.305 "num_base_bdevs_discovered": 2, 00:14:57.305 "num_base_bdevs_operational": 2, 00:14:57.305 "base_bdevs_list": [ 00:14:57.305 { 00:14:57.305 "name": null, 00:14:57.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.305 "is_configured": false, 00:14:57.305 "data_offset": 2048, 00:14:57.305 "data_size": 63488 00:14:57.305 }, 00:14:57.305 { 00:14:57.305 "name": "pt2", 00:14:57.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.305 "is_configured": true, 00:14:57.305 "data_offset": 2048, 00:14:57.305 "data_size": 63488 00:14:57.305 }, 00:14:57.305 { 00:14:57.305 "name": "pt3", 00:14:57.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.305 "is_configured": true, 00:14:57.305 "data_offset": 2048, 00:14:57.305 "data_size": 63488 00:14:57.305 } 00:14:57.305 ] 00:14:57.305 }' 00:14:57.305 23:49:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.305 23:49:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.565 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.825 [2024-12-06 23:49:09.130719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e02a869d-91fa-4855-a359-078be316b3be '!=' e02a869d-91fa-4855-a359-078be316b3be ']' 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81043 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81043 ']' 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81043 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81043 00:14:57.825 killing process with pid 81043 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81043' 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81043 00:14:57.825 [2024-12-06 23:49:09.208299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.825 [2024-12-06 23:49:09.208368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:14:57.825 [2024-12-06 23:49:09.208418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.825 [2024-12-06 23:49:09.208429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:57.825 23:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81043 00:14:58.085 [2024-12-06 23:49:09.489592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.024 23:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:59.024 00:14:59.024 real 0m7.494s 00:14:59.024 user 0m11.725s 00:14:59.024 sys 0m1.392s 00:14:59.024 23:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.024 23:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.024 ************************************ 00:14:59.024 END TEST raid5f_superblock_test 00:14:59.024 ************************************ 00:14:59.284 23:49:10 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:59.285 23:49:10 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:59.285 23:49:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:59.285 23:49:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.285 23:49:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.285 ************************************ 00:14:59.285 START TEST raid5f_rebuild_test 00:14:59.285 ************************************ 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:59.285 23:49:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81485 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81485 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81485 ']' 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.285 23:49:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.285 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:14:59.285 Zero copy mechanism will not be used. 00:14:59.285 [2024-12-06 23:49:10.742414] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:14:59.285 [2024-12-06 23:49:10.742543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81485 ] 00:14:59.545 [2024-12-06 23:49:10.922681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.545 [2024-12-06 23:49:11.023397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.805 [2024-12-06 23:49:11.216370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.805 [2024-12-06 23:49:11.216425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.066 BaseBdev1_malloc 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.066 23:49:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.066 [2024-12-06 23:49:11.600380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:00.066 [2024-12-06 23:49:11.600455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.066 [2024-12-06 23:49:11.600477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:00.066 [2024-12-06 23:49:11.600490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.066 [2024-12-06 23:49:11.602444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.066 [2024-12-06 23:49:11.602576] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:00.066 BaseBdev1 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.066 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 BaseBdev2_malloc 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 [2024-12-06 23:49:11.654050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:00.327 [2024-12-06 23:49:11.654113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.327 [2024-12-06 23:49:11.654133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:00.327 [2024-12-06 23:49:11.654143] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.327 [2024-12-06 23:49:11.656103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.327 [2024-12-06 23:49:11.656145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:00.327 BaseBdev2 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 BaseBdev3_malloc 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 [2024-12-06 23:49:11.743010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:00.327 [2024-12-06 23:49:11.743065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.327 [2024-12-06 23:49:11.743086] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:00.327 [2024-12-06 23:49:11.743096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.327 [2024-12-06 23:49:11.745061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.327 [2024-12-06 23:49:11.745104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:00.327 BaseBdev3 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 spare_malloc 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 spare_delay 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 [2024-12-06 23:49:11.803130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:00.327 [2024-12-06 23:49:11.803264] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.327 [2024-12-06 23:49:11.803285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:00.327 [2024-12-06 23:49:11.803295] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.327 [2024-12-06 23:49:11.805346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.327 [2024-12-06 23:49:11.805389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:00.327 spare 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 [2024-12-06 23:49:11.815173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.327 [2024-12-06 23:49:11.816851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.327 [2024-12-06 23:49:11.816910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.327 [2024-12-06 23:49:11.816990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:00.327 [2024-12-06 23:49:11.817001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:00.327 [2024-12-06 23:49:11.817228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:00.327 [2024-12-06 23:49:11.822599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:00.327 [2024-12-06 23:49:11.822657] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:00.327 [2024-12-06 23:49:11.822909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.327 23:49:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.327 "name": "raid_bdev1", 00:15:00.327 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:00.327 "strip_size_kb": 64, 00:15:00.327 "state": "online", 00:15:00.327 "raid_level": "raid5f", 00:15:00.327 "superblock": false, 00:15:00.327 "num_base_bdevs": 3, 00:15:00.327 "num_base_bdevs_discovered": 3, 00:15:00.327 "num_base_bdevs_operational": 3, 00:15:00.327 "base_bdevs_list": [ 00:15:00.327 { 00:15:00.327 "name": "BaseBdev1", 00:15:00.327 "uuid": "fa576f69-1fd0-5de7-a2fb-ab124d29bbd4", 00:15:00.327 "is_configured": true, 00:15:00.327 "data_offset": 0, 00:15:00.327 "data_size": 65536 00:15:00.327 }, 00:15:00.327 { 00:15:00.327 "name": "BaseBdev2", 00:15:00.327 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:00.327 "is_configured": true, 00:15:00.327 "data_offset": 0, 00:15:00.327 "data_size": 65536 00:15:00.327 }, 00:15:00.327 { 00:15:00.327 "name": "BaseBdev3", 00:15:00.327 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:00.327 "is_configured": true, 00:15:00.327 "data_offset": 0, 00:15:00.327 "data_size": 65536 00:15:00.327 } 00:15:00.327 ] 00:15:00.327 }' 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.327 23:49:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.898 [2024-12-06 23:49:12.280305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:00.898 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:01.158 [2024-12-06 23:49:12.559770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:01.158 /dev/nbd0 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.158 1+0 records in 00:15:01.158 1+0 records out 00:15:01.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385829 s, 10.6 MB/s 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:01.158 23:49:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:01.742 512+0 records in 00:15:01.742 512+0 records out 00:15:01.742 67108864 bytes (67 MB, 64 MiB) copied, 0.587769 s, 114 MB/s 00:15:01.742 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:01.742 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.742 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:01.742 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.742 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:01.742 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.742 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:02.066 [2024-12-06 23:49:13.447869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.066 [2024-12-06 23:49:13.474292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.066 "name": "raid_bdev1", 00:15:02.066 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:02.066 "strip_size_kb": 64, 00:15:02.066 "state": "online", 00:15:02.066 "raid_level": "raid5f", 00:15:02.066 "superblock": false, 00:15:02.066 "num_base_bdevs": 3, 00:15:02.066 "num_base_bdevs_discovered": 2, 00:15:02.066 "num_base_bdevs_operational": 2, 00:15:02.066 "base_bdevs_list": [ 00:15:02.066 { 00:15:02.066 "name": null, 00:15:02.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.066 "is_configured": false, 00:15:02.066 "data_offset": 0, 00:15:02.066 "data_size": 65536 00:15:02.066 }, 00:15:02.066 { 00:15:02.066 "name": "BaseBdev2", 00:15:02.066 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:02.066 "is_configured": true, 00:15:02.066 "data_offset": 0, 00:15:02.066 "data_size": 65536 00:15:02.066 }, 00:15:02.066 { 00:15:02.066 "name": "BaseBdev3", 00:15:02.066 "uuid": 
"a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:02.066 "is_configured": true, 00:15:02.066 "data_offset": 0, 00:15:02.066 "data_size": 65536 00:15:02.066 } 00:15:02.066 ] 00:15:02.066 }' 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.066 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.636 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.636 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.636 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.637 [2024-12-06 23:49:13.905581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.637 [2024-12-06 23:49:13.920871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:02.637 23:49:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.637 23:49:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:02.637 [2024-12-06 23:49:13.928341] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.575 23:49:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.575 "name": "raid_bdev1", 00:15:03.575 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:03.575 "strip_size_kb": 64, 00:15:03.575 "state": "online", 00:15:03.575 "raid_level": "raid5f", 00:15:03.575 "superblock": false, 00:15:03.575 "num_base_bdevs": 3, 00:15:03.575 "num_base_bdevs_discovered": 3, 00:15:03.575 "num_base_bdevs_operational": 3, 00:15:03.575 "process": { 00:15:03.575 "type": "rebuild", 00:15:03.575 "target": "spare", 00:15:03.575 "progress": { 00:15:03.575 "blocks": 20480, 00:15:03.575 "percent": 15 00:15:03.575 } 00:15:03.575 }, 00:15:03.575 "base_bdevs_list": [ 00:15:03.575 { 00:15:03.575 "name": "spare", 00:15:03.575 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:03.575 "is_configured": true, 00:15:03.575 "data_offset": 0, 00:15:03.575 "data_size": 65536 00:15:03.575 }, 00:15:03.575 { 00:15:03.575 "name": "BaseBdev2", 00:15:03.575 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:03.575 "is_configured": true, 00:15:03.575 "data_offset": 0, 00:15:03.575 "data_size": 65536 00:15:03.575 }, 00:15:03.575 { 00:15:03.575 "name": "BaseBdev3", 00:15:03.575 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:03.575 "is_configured": true, 00:15:03.575 "data_offset": 0, 00:15:03.575 "data_size": 65536 00:15:03.575 } 00:15:03.575 ] 00:15:03.575 }' 00:15:03.575 23:49:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.575 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.575 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.576 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.576 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:03.576 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.576 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.576 [2024-12-06 23:49:15.083568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.576 [2024-12-06 23:49:15.135715] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:03.576 [2024-12-06 23:49:15.135823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.576 [2024-12-06 23:49:15.135843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.576 [2024-12-06 23:49:15.135850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.835 "name": "raid_bdev1", 00:15:03.835 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:03.835 "strip_size_kb": 64, 00:15:03.835 "state": "online", 00:15:03.835 "raid_level": "raid5f", 00:15:03.835 "superblock": false, 00:15:03.835 "num_base_bdevs": 3, 00:15:03.835 "num_base_bdevs_discovered": 2, 00:15:03.835 "num_base_bdevs_operational": 2, 00:15:03.835 "base_bdevs_list": [ 00:15:03.835 { 00:15:03.835 "name": null, 00:15:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.835 "is_configured": false, 00:15:03.835 "data_offset": 0, 00:15:03.835 "data_size": 65536 00:15:03.835 }, 00:15:03.835 { 00:15:03.835 "name": "BaseBdev2", 00:15:03.835 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:03.835 "is_configured": true, 00:15:03.835 "data_offset": 0, 00:15:03.835 "data_size": 65536 00:15:03.835 }, 00:15:03.835 { 00:15:03.835 "name": "BaseBdev3", 00:15:03.835 "uuid": 
"a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:03.835 "is_configured": true, 00:15:03.835 "data_offset": 0, 00:15:03.835 "data_size": 65536 00:15:03.835 } 00:15:03.835 ] 00:15:03.835 }' 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.835 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.095 "name": "raid_bdev1", 00:15:04.095 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:04.095 "strip_size_kb": 64, 00:15:04.095 "state": "online", 00:15:04.095 "raid_level": "raid5f", 00:15:04.095 "superblock": false, 00:15:04.095 "num_base_bdevs": 3, 00:15:04.095 "num_base_bdevs_discovered": 2, 00:15:04.095 "num_base_bdevs_operational": 2, 00:15:04.095 "base_bdevs_list": [ 00:15:04.095 { 00:15:04.095 
"name": null, 00:15:04.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.095 "is_configured": false, 00:15:04.095 "data_offset": 0, 00:15:04.095 "data_size": 65536 00:15:04.095 }, 00:15:04.095 { 00:15:04.095 "name": "BaseBdev2", 00:15:04.095 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:04.095 "is_configured": true, 00:15:04.095 "data_offset": 0, 00:15:04.095 "data_size": 65536 00:15:04.095 }, 00:15:04.095 { 00:15:04.095 "name": "BaseBdev3", 00:15:04.095 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:04.095 "is_configured": true, 00:15:04.095 "data_offset": 0, 00:15:04.095 "data_size": 65536 00:15:04.095 } 00:15:04.095 ] 00:15:04.095 }' 00:15:04.095 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.354 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.354 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.354 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.354 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.354 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.354 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.354 [2024-12-06 23:49:15.756084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.354 [2024-12-06 23:49:15.770550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:04.354 23:49:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.355 23:49:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:04.355 [2024-12-06 23:49:15.777483] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.293 "name": "raid_bdev1", 00:15:05.293 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:05.293 "strip_size_kb": 64, 00:15:05.293 "state": "online", 00:15:05.293 "raid_level": "raid5f", 00:15:05.293 "superblock": false, 00:15:05.293 "num_base_bdevs": 3, 00:15:05.293 "num_base_bdevs_discovered": 3, 00:15:05.293 "num_base_bdevs_operational": 3, 00:15:05.293 "process": { 00:15:05.293 "type": "rebuild", 00:15:05.293 "target": "spare", 00:15:05.293 "progress": { 00:15:05.293 "blocks": 20480, 00:15:05.293 "percent": 15 00:15:05.293 } 00:15:05.293 }, 00:15:05.293 "base_bdevs_list": [ 00:15:05.293 { 00:15:05.293 "name": "spare", 00:15:05.293 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:05.293 "is_configured": true, 00:15:05.293 "data_offset": 0, 
00:15:05.293 "data_size": 65536 00:15:05.293 }, 00:15:05.293 { 00:15:05.293 "name": "BaseBdev2", 00:15:05.293 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:05.293 "is_configured": true, 00:15:05.293 "data_offset": 0, 00:15:05.293 "data_size": 65536 00:15:05.293 }, 00:15:05.293 { 00:15:05.293 "name": "BaseBdev3", 00:15:05.293 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:05.293 "is_configured": true, 00:15:05.293 "data_offset": 0, 00:15:05.293 "data_size": 65536 00:15:05.293 } 00:15:05.293 ] 00:15:05.293 }' 00:15:05.293 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=545 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.552 23:49:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.552 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.552 "name": "raid_bdev1", 00:15:05.553 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:05.553 "strip_size_kb": 64, 00:15:05.553 "state": "online", 00:15:05.553 "raid_level": "raid5f", 00:15:05.553 "superblock": false, 00:15:05.553 "num_base_bdevs": 3, 00:15:05.553 "num_base_bdevs_discovered": 3, 00:15:05.553 "num_base_bdevs_operational": 3, 00:15:05.553 "process": { 00:15:05.553 "type": "rebuild", 00:15:05.553 "target": "spare", 00:15:05.553 "progress": { 00:15:05.553 "blocks": 22528, 00:15:05.553 "percent": 17 00:15:05.553 } 00:15:05.553 }, 00:15:05.553 "base_bdevs_list": [ 00:15:05.553 { 00:15:05.553 "name": "spare", 00:15:05.553 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:05.553 "is_configured": true, 00:15:05.553 "data_offset": 0, 00:15:05.553 "data_size": 65536 00:15:05.553 }, 00:15:05.553 { 00:15:05.553 "name": "BaseBdev2", 00:15:05.553 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:05.553 "is_configured": true, 00:15:05.553 "data_offset": 0, 00:15:05.553 "data_size": 65536 00:15:05.553 }, 00:15:05.553 { 00:15:05.553 "name": "BaseBdev3", 00:15:05.553 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:05.553 "is_configured": true, 00:15:05.553 "data_offset": 0, 00:15:05.553 "data_size": 65536 00:15:05.553 } 
00:15:05.553 ] 00:15:05.553 }' 00:15:05.553 23:49:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.553 23:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.553 23:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.553 23:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.553 23:49:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.932 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.932 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.932 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.932 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.932 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.932 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.932 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.933 "name": "raid_bdev1", 00:15:06.933 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:06.933 
"strip_size_kb": 64, 00:15:06.933 "state": "online", 00:15:06.933 "raid_level": "raid5f", 00:15:06.933 "superblock": false, 00:15:06.933 "num_base_bdevs": 3, 00:15:06.933 "num_base_bdevs_discovered": 3, 00:15:06.933 "num_base_bdevs_operational": 3, 00:15:06.933 "process": { 00:15:06.933 "type": "rebuild", 00:15:06.933 "target": "spare", 00:15:06.933 "progress": { 00:15:06.933 "blocks": 47104, 00:15:06.933 "percent": 35 00:15:06.933 } 00:15:06.933 }, 00:15:06.933 "base_bdevs_list": [ 00:15:06.933 { 00:15:06.933 "name": "spare", 00:15:06.933 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:06.933 "is_configured": true, 00:15:06.933 "data_offset": 0, 00:15:06.933 "data_size": 65536 00:15:06.933 }, 00:15:06.933 { 00:15:06.933 "name": "BaseBdev2", 00:15:06.933 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:06.933 "is_configured": true, 00:15:06.933 "data_offset": 0, 00:15:06.933 "data_size": 65536 00:15:06.933 }, 00:15:06.933 { 00:15:06.933 "name": "BaseBdev3", 00:15:06.933 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:06.933 "is_configured": true, 00:15:06.933 "data_offset": 0, 00:15:06.933 "data_size": 65536 00:15:06.933 } 00:15:06.933 ] 00:15:06.933 }' 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.933 23:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.870 23:49:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.870 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.870 "name": "raid_bdev1", 00:15:07.870 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:07.870 "strip_size_kb": 64, 00:15:07.870 "state": "online", 00:15:07.870 "raid_level": "raid5f", 00:15:07.870 "superblock": false, 00:15:07.870 "num_base_bdevs": 3, 00:15:07.870 "num_base_bdevs_discovered": 3, 00:15:07.870 "num_base_bdevs_operational": 3, 00:15:07.870 "process": { 00:15:07.870 "type": "rebuild", 00:15:07.870 "target": "spare", 00:15:07.870 "progress": { 00:15:07.870 "blocks": 69632, 00:15:07.870 "percent": 53 00:15:07.870 } 00:15:07.870 }, 00:15:07.870 "base_bdevs_list": [ 00:15:07.870 { 00:15:07.870 "name": "spare", 00:15:07.870 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:07.870 "is_configured": true, 00:15:07.870 "data_offset": 0, 00:15:07.870 "data_size": 65536 00:15:07.870 }, 00:15:07.870 { 00:15:07.870 "name": "BaseBdev2", 00:15:07.870 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:07.870 
"is_configured": true, 00:15:07.870 "data_offset": 0, 00:15:07.870 "data_size": 65536 00:15:07.870 }, 00:15:07.870 { 00:15:07.870 "name": "BaseBdev3", 00:15:07.870 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:07.871 "is_configured": true, 00:15:07.871 "data_offset": 0, 00:15:07.871 "data_size": 65536 00:15:07.871 } 00:15:07.871 ] 00:15:07.871 }' 00:15:07.871 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.871 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.871 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.871 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.871 23:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.252 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.252 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.252 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.253 "name": "raid_bdev1", 00:15:09.253 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:09.253 "strip_size_kb": 64, 00:15:09.253 "state": "online", 00:15:09.253 "raid_level": "raid5f", 00:15:09.253 "superblock": false, 00:15:09.253 "num_base_bdevs": 3, 00:15:09.253 "num_base_bdevs_discovered": 3, 00:15:09.253 "num_base_bdevs_operational": 3, 00:15:09.253 "process": { 00:15:09.253 "type": "rebuild", 00:15:09.253 "target": "spare", 00:15:09.253 "progress": { 00:15:09.253 "blocks": 92160, 00:15:09.253 "percent": 70 00:15:09.253 } 00:15:09.253 }, 00:15:09.253 "base_bdevs_list": [ 00:15:09.253 { 00:15:09.253 "name": "spare", 00:15:09.253 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:09.253 "is_configured": true, 00:15:09.253 "data_offset": 0, 00:15:09.253 "data_size": 65536 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "name": "BaseBdev2", 00:15:09.253 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:09.253 "is_configured": true, 00:15:09.253 "data_offset": 0, 00:15:09.253 "data_size": 65536 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "name": "BaseBdev3", 00:15:09.253 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:09.253 "is_configured": true, 00:15:09.253 "data_offset": 0, 00:15:09.253 "data_size": 65536 00:15:09.253 } 00:15:09.253 ] 00:15:09.253 }' 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.253 23:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.253 23:49:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.194 "name": "raid_bdev1", 00:15:10.194 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:10.194 "strip_size_kb": 64, 00:15:10.194 "state": "online", 00:15:10.194 "raid_level": "raid5f", 00:15:10.194 "superblock": false, 00:15:10.194 "num_base_bdevs": 3, 00:15:10.194 "num_base_bdevs_discovered": 3, 00:15:10.194 "num_base_bdevs_operational": 3, 00:15:10.194 "process": { 00:15:10.194 "type": "rebuild", 00:15:10.194 "target": "spare", 00:15:10.194 "progress": { 00:15:10.194 "blocks": 116736, 00:15:10.194 "percent": 89 00:15:10.194 } 00:15:10.194 }, 00:15:10.194 "base_bdevs_list": [ 00:15:10.194 { 
00:15:10.194 "name": "spare", 00:15:10.194 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:10.194 "is_configured": true, 00:15:10.194 "data_offset": 0, 00:15:10.194 "data_size": 65536 00:15:10.194 }, 00:15:10.194 { 00:15:10.194 "name": "BaseBdev2", 00:15:10.194 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:10.194 "is_configured": true, 00:15:10.194 "data_offset": 0, 00:15:10.194 "data_size": 65536 00:15:10.194 }, 00:15:10.194 { 00:15:10.194 "name": "BaseBdev3", 00:15:10.194 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:10.194 "is_configured": true, 00:15:10.194 "data_offset": 0, 00:15:10.194 "data_size": 65536 00:15:10.194 } 00:15:10.194 ] 00:15:10.194 }' 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.194 23:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.764 [2024-12-06 23:49:22.213044] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:10.764 [2024-12-06 23:49:22.213119] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:10.764 [2024-12-06 23:49:22.213160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.335 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.335 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.335 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.335 23:49:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.335 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.335 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.336 "name": "raid_bdev1", 00:15:11.336 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:11.336 "strip_size_kb": 64, 00:15:11.336 "state": "online", 00:15:11.336 "raid_level": "raid5f", 00:15:11.336 "superblock": false, 00:15:11.336 "num_base_bdevs": 3, 00:15:11.336 "num_base_bdevs_discovered": 3, 00:15:11.336 "num_base_bdevs_operational": 3, 00:15:11.336 "base_bdevs_list": [ 00:15:11.336 { 00:15:11.336 "name": "spare", 00:15:11.336 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:11.336 "is_configured": true, 00:15:11.336 "data_offset": 0, 00:15:11.336 "data_size": 65536 00:15:11.336 }, 00:15:11.336 { 00:15:11.336 "name": "BaseBdev2", 00:15:11.336 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:11.336 "is_configured": true, 00:15:11.336 "data_offset": 0, 00:15:11.336 "data_size": 65536 00:15:11.336 }, 00:15:11.336 { 00:15:11.336 "name": "BaseBdev3", 00:15:11.336 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:11.336 "is_configured": true, 00:15:11.336 "data_offset": 0, 00:15:11.336 "data_size": 65536 00:15:11.336 } 
00:15:11.336 ] 00:15:11.336 }' 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.336 "name": "raid_bdev1", 00:15:11.336 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:11.336 "strip_size_kb": 64, 00:15:11.336 "state": "online", 00:15:11.336 "raid_level": "raid5f", 00:15:11.336 "superblock": false, 
00:15:11.336 "num_base_bdevs": 3, 00:15:11.336 "num_base_bdevs_discovered": 3, 00:15:11.336 "num_base_bdevs_operational": 3, 00:15:11.336 "base_bdevs_list": [ 00:15:11.336 { 00:15:11.336 "name": "spare", 00:15:11.336 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:11.336 "is_configured": true, 00:15:11.336 "data_offset": 0, 00:15:11.336 "data_size": 65536 00:15:11.336 }, 00:15:11.336 { 00:15:11.336 "name": "BaseBdev2", 00:15:11.336 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:11.336 "is_configured": true, 00:15:11.336 "data_offset": 0, 00:15:11.336 "data_size": 65536 00:15:11.336 }, 00:15:11.336 { 00:15:11.336 "name": "BaseBdev3", 00:15:11.336 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 00:15:11.336 "is_configured": true, 00:15:11.336 "data_offset": 0, 00:15:11.336 "data_size": 65536 00:15:11.336 } 00:15:11.336 ] 00:15:11.336 }' 00:15:11.336 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.597 
23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.597 "name": "raid_bdev1", 00:15:11.597 "uuid": "8a385a6a-4536-4a6c-82ee-43d4339a299e", 00:15:11.597 "strip_size_kb": 64, 00:15:11.597 "state": "online", 00:15:11.597 "raid_level": "raid5f", 00:15:11.597 "superblock": false, 00:15:11.597 "num_base_bdevs": 3, 00:15:11.597 "num_base_bdevs_discovered": 3, 00:15:11.597 "num_base_bdevs_operational": 3, 00:15:11.597 "base_bdevs_list": [ 00:15:11.597 { 00:15:11.597 "name": "spare", 00:15:11.597 "uuid": "fbf83172-668c-5a4d-86a6-304d1cfa2bb4", 00:15:11.597 "is_configured": true, 00:15:11.597 "data_offset": 0, 00:15:11.597 "data_size": 65536 00:15:11.597 }, 00:15:11.597 { 00:15:11.597 "name": "BaseBdev2", 00:15:11.597 "uuid": "ced72f1d-e77b-5f36-ba6d-0a2e18fde364", 00:15:11.597 "is_configured": true, 00:15:11.597 "data_offset": 0, 00:15:11.597 "data_size": 65536 00:15:11.597 }, 00:15:11.597 { 00:15:11.597 "name": "BaseBdev3", 00:15:11.597 "uuid": "a7403bf7-a4a4-518c-8b82-f6dce006c76c", 
00:15:11.597 "is_configured": true, 00:15:11.597 "data_offset": 0, 00:15:11.597 "data_size": 65536 00:15:11.597 } 00:15:11.597 ] 00:15:11.597 }' 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.597 23:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.857 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.857 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.857 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.857 [2024-12-06 23:49:23.412225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.857 [2024-12-06 23:49:23.412307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.857 [2024-12-06 23:49:23.412422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.857 [2024-12-06 23:49:23.412514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.857 [2024-12-06 23:49:23.412561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:11.857 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.118 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:12.118 /dev/nbd0 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.379 1+0 records in 00:15:12.379 1+0 records out 00:15:12.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447917 s, 9.1 MB/s 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.379 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:12.379 /dev/nbd1 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:12.640 23:49:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.640 1+0 records in 00:15:12.640 1+0 records out 00:15:12.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451522 s, 9.1 MB/s 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.640 23:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:12.640 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:12.640 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.640 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.640 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.640 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:12.640 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.640 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.900 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81485 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81485 ']' 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81485 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81485 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.160 killing process with pid 81485 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81485' 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81485 00:15:13.160 
Received shutdown signal, test time was about 60.000000 seconds 00:15:13.160 00:15:13.160 Latency(us) 00:15:13.160 [2024-12-06T23:49:24.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.160 [2024-12-06T23:49:24.723Z] =================================================================================================================== 00:15:13.160 [2024-12-06T23:49:24.723Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:13.160 [2024-12-06 23:49:24.670808] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.160 23:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81485 00:15:13.731 [2024-12-06 23:49:25.038894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:14.670 00:15:14.670 real 0m15.450s 00:15:14.670 user 0m18.867s 00:15:14.670 sys 0m2.320s 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.670 ************************************ 00:15:14.670 END TEST raid5f_rebuild_test 00:15:14.670 ************************************ 00:15:14.670 23:49:26 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:14.670 23:49:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:14.670 23:49:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.670 23:49:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.670 ************************************ 00:15:14.670 START TEST raid5f_rebuild_test_sb 00:15:14.670 ************************************ 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:14.670 
23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81926 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81926 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81926 ']' 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.670 23:49:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.930 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:14.930 Zero copy mechanism will not be used. 00:15:14.930 [2024-12-06 23:49:26.265606] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:15:14.930 [2024-12-06 23:49:26.265763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81926 ] 00:15:14.930 [2024-12-06 23:49:26.438686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.191 [2024-12-06 23:49:26.544969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.191 [2024-12-06 23:49:26.713285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.191 [2024-12-06 23:49:26.713328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.763 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.763 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:15.763 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.763 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:15.763 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.763 23:49:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.763 BaseBdev1_malloc 00:15:15.763 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.764 [2024-12-06 23:49:27.132671] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:15.764 [2024-12-06 23:49:27.132733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.764 [2024-12-06 23:49:27.132768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:15.764 [2024-12-06 23:49:27.132780] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.764 [2024-12-06 23:49:27.134794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.764 [2024-12-06 23:49:27.134831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:15.764 BaseBdev1 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.764 BaseBdev2_malloc 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.764 [2024-12-06 23:49:27.184766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:15.764 [2024-12-06 23:49:27.184830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.764 [2024-12-06 23:49:27.184867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:15.764 [2024-12-06 23:49:27.184878] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.764 [2024-12-06 23:49:27.186861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.764 [2024-12-06 23:49:27.186896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:15.764 BaseBdev2 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.764 BaseBdev3_malloc 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.764 [2024-12-06 23:49:27.261217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:15.764 [2024-12-06 23:49:27.261270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.764 [2024-12-06 23:49:27.261304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:15.764 [2024-12-06 23:49:27.261314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.764 [2024-12-06 23:49:27.263251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.764 [2024-12-06 23:49:27.263291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:15.764 BaseBdev3 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.764 spare_malloc 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.764 spare_delay 00:15:15.764 
23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.764 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.026 [2024-12-06 23:49:27.328059] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.026 [2024-12-06 23:49:27.328117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.026 [2024-12-06 23:49:27.328148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:16.026 [2024-12-06 23:49:27.328159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.026 [2024-12-06 23:49:27.330145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.026 [2024-12-06 23:49:27.330202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.026 spare 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.026 [2024-12-06 23:49:27.340104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.026 [2024-12-06 23:49:27.341840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.026 [2024-12-06 23:49:27.341931] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.026 [2024-12-06 23:49:27.342095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:16.026 [2024-12-06 23:49:27.342106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.026 [2024-12-06 23:49:27.342341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:16.026 [2024-12-06 23:49:27.347846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:16.026 [2024-12-06 23:49:27.347887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:16.026 [2024-12-06 23:49:27.348077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.026 "name": "raid_bdev1", 00:15:16.026 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:16.026 "strip_size_kb": 64, 00:15:16.026 "state": "online", 00:15:16.026 "raid_level": "raid5f", 00:15:16.026 "superblock": true, 00:15:16.026 "num_base_bdevs": 3, 00:15:16.026 "num_base_bdevs_discovered": 3, 00:15:16.026 "num_base_bdevs_operational": 3, 00:15:16.026 "base_bdevs_list": [ 00:15:16.026 { 00:15:16.026 "name": "BaseBdev1", 00:15:16.026 "uuid": "acedc0ad-1894-5b6e-adec-91541795e84d", 00:15:16.026 "is_configured": true, 00:15:16.026 "data_offset": 2048, 00:15:16.026 "data_size": 63488 00:15:16.026 }, 00:15:16.026 { 00:15:16.026 "name": "BaseBdev2", 00:15:16.026 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:16.026 "is_configured": true, 00:15:16.026 "data_offset": 2048, 00:15:16.026 "data_size": 63488 00:15:16.026 }, 00:15:16.026 { 00:15:16.026 "name": "BaseBdev3", 00:15:16.026 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:16.026 "is_configured": true, 00:15:16.026 "data_offset": 2048, 00:15:16.026 "data_size": 63488 00:15:16.026 } 00:15:16.026 ] 00:15:16.026 }' 00:15:16.026 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.026 23:49:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.286 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:16.286 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:16.286 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.286 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.287 [2024-12-06 23:49:27.825632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.287 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.287 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:16.547 23:49:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:16.547 23:49:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:16.547 [2024-12-06 23:49:28.053107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:16.547 /dev/nbd0 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:16.547 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.807 1+0 records in 00:15:16.807 1+0 records out 00:15:16.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467667 s, 8.8 MB/s 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:16.807 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:17.067 496+0 records in 00:15:17.067 496+0 records out 00:15:17.067 65011712 bytes (65 MB, 62 MiB) copied, 0.312238 s, 208 MB/s 00:15:17.067 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:17.067 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.067 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:17.067 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.067 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:17.067 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.067 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.327 [2024-12-06 23:49:28.651827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.327 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.327 [2024-12-06 23:49:28.666432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.328 "name": "raid_bdev1", 00:15:17.328 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:17.328 "strip_size_kb": 64, 00:15:17.328 "state": "online", 00:15:17.328 "raid_level": "raid5f", 00:15:17.328 "superblock": true, 00:15:17.328 "num_base_bdevs": 3, 00:15:17.328 "num_base_bdevs_discovered": 2, 00:15:17.328 "num_base_bdevs_operational": 2, 00:15:17.328 "base_bdevs_list": [ 00:15:17.328 { 00:15:17.328 "name": null, 00:15:17.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.328 "is_configured": false, 00:15:17.328 "data_offset": 0, 00:15:17.328 "data_size": 63488 00:15:17.328 }, 00:15:17.328 { 00:15:17.328 "name": "BaseBdev2", 00:15:17.328 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:17.328 "is_configured": true, 00:15:17.328 "data_offset": 2048, 00:15:17.328 "data_size": 63488 00:15:17.328 }, 00:15:17.328 { 00:15:17.328 "name": "BaseBdev3", 00:15:17.328 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:17.328 "is_configured": true, 00:15:17.328 "data_offset": 2048, 00:15:17.328 "data_size": 63488 00:15:17.328 } 00:15:17.328 ] 00:15:17.328 }' 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.328 23:49:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.588 23:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:17.588 23:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.588 23:49:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.588 [2024-12-06 23:49:29.145712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.848 [2024-12-06 23:49:29.160900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:17.848 23:49:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.848 23:49:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:17.848 [2024-12-06 23:49:29.168323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.789 "name": "raid_bdev1", 00:15:18.789 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:18.789 "strip_size_kb": 64, 00:15:18.789 "state": "online", 00:15:18.789 "raid_level": "raid5f", 00:15:18.789 "superblock": true, 00:15:18.789 "num_base_bdevs": 3, 00:15:18.789 "num_base_bdevs_discovered": 3, 00:15:18.789 "num_base_bdevs_operational": 3, 00:15:18.789 "process": { 00:15:18.789 "type": "rebuild", 00:15:18.789 "target": "spare", 00:15:18.789 "progress": { 
00:15:18.789 "blocks": 20480, 00:15:18.789 "percent": 16 00:15:18.789 } 00:15:18.789 }, 00:15:18.789 "base_bdevs_list": [ 00:15:18.789 { 00:15:18.789 "name": "spare", 00:15:18.789 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:18.789 "is_configured": true, 00:15:18.789 "data_offset": 2048, 00:15:18.789 "data_size": 63488 00:15:18.789 }, 00:15:18.789 { 00:15:18.789 "name": "BaseBdev2", 00:15:18.789 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:18.789 "is_configured": true, 00:15:18.789 "data_offset": 2048, 00:15:18.789 "data_size": 63488 00:15:18.789 }, 00:15:18.789 { 00:15:18.789 "name": "BaseBdev3", 00:15:18.789 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:18.789 "is_configured": true, 00:15:18.789 "data_offset": 2048, 00:15:18.789 "data_size": 63488 00:15:18.789 } 00:15:18.789 ] 00:15:18.789 }' 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.789 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.789 [2024-12-06 23:49:30.275795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.050 [2024-12-06 23:49:30.375557] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.050 [2024-12-06 23:49:30.375631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:19.050 [2024-12-06 23:49:30.375649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.050 [2024-12-06 23:49:30.375671] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.050 23:49:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.050 "name": "raid_bdev1", 00:15:19.050 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:19.050 "strip_size_kb": 64, 00:15:19.050 "state": "online", 00:15:19.050 "raid_level": "raid5f", 00:15:19.050 "superblock": true, 00:15:19.050 "num_base_bdevs": 3, 00:15:19.050 "num_base_bdevs_discovered": 2, 00:15:19.050 "num_base_bdevs_operational": 2, 00:15:19.050 "base_bdevs_list": [ 00:15:19.050 { 00:15:19.050 "name": null, 00:15:19.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.050 "is_configured": false, 00:15:19.050 "data_offset": 0, 00:15:19.050 "data_size": 63488 00:15:19.050 }, 00:15:19.050 { 00:15:19.050 "name": "BaseBdev2", 00:15:19.050 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:19.050 "is_configured": true, 00:15:19.050 "data_offset": 2048, 00:15:19.050 "data_size": 63488 00:15:19.050 }, 00:15:19.050 { 00:15:19.050 "name": "BaseBdev3", 00:15:19.050 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:19.050 "is_configured": true, 00:15:19.050 "data_offset": 2048, 00:15:19.050 "data_size": 63488 00:15:19.050 } 00:15:19.050 ] 00:15:19.050 }' 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.050 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.311 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.571 "name": "raid_bdev1", 00:15:19.571 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:19.571 "strip_size_kb": 64, 00:15:19.571 "state": "online", 00:15:19.571 "raid_level": "raid5f", 00:15:19.571 "superblock": true, 00:15:19.571 "num_base_bdevs": 3, 00:15:19.571 "num_base_bdevs_discovered": 2, 00:15:19.571 "num_base_bdevs_operational": 2, 00:15:19.571 "base_bdevs_list": [ 00:15:19.571 { 00:15:19.571 "name": null, 00:15:19.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.571 "is_configured": false, 00:15:19.571 "data_offset": 0, 00:15:19.571 "data_size": 63488 00:15:19.571 }, 00:15:19.571 { 00:15:19.571 "name": "BaseBdev2", 00:15:19.571 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:19.571 "is_configured": true, 00:15:19.571 "data_offset": 2048, 00:15:19.571 "data_size": 63488 00:15:19.571 }, 00:15:19.571 { 00:15:19.571 "name": "BaseBdev3", 00:15:19.571 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:19.571 "is_configured": true, 00:15:19.571 "data_offset": 2048, 00:15:19.571 "data_size": 63488 00:15:19.571 } 00:15:19.571 ] 00:15:19.571 }' 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.571 23:49:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.572 [2024-12-06 23:49:30.988113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.572 [2024-12-06 23:49:31.003167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:19.572 23:49:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.572 23:49:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:19.572 [2024-12-06 23:49:31.010290] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.513 "name": "raid_bdev1", 00:15:20.513 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:20.513 "strip_size_kb": 64, 00:15:20.513 "state": "online", 00:15:20.513 "raid_level": "raid5f", 00:15:20.513 "superblock": true, 00:15:20.513 "num_base_bdevs": 3, 00:15:20.513 "num_base_bdevs_discovered": 3, 00:15:20.513 "num_base_bdevs_operational": 3, 00:15:20.513 "process": { 00:15:20.513 "type": "rebuild", 00:15:20.513 "target": "spare", 00:15:20.513 "progress": { 00:15:20.513 "blocks": 20480, 00:15:20.513 "percent": 16 00:15:20.513 } 00:15:20.513 }, 00:15:20.513 "base_bdevs_list": [ 00:15:20.513 { 00:15:20.513 "name": "spare", 00:15:20.513 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:20.513 "is_configured": true, 00:15:20.513 "data_offset": 2048, 00:15:20.513 "data_size": 63488 00:15:20.513 }, 00:15:20.513 { 00:15:20.513 "name": "BaseBdev2", 00:15:20.513 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:20.513 "is_configured": true, 00:15:20.513 "data_offset": 2048, 00:15:20.513 "data_size": 63488 00:15:20.513 }, 00:15:20.513 { 00:15:20.513 "name": "BaseBdev3", 00:15:20.513 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:20.513 "is_configured": true, 00:15:20.513 "data_offset": 2048, 00:15:20.513 "data_size": 63488 00:15:20.513 } 00:15:20.513 ] 00:15:20.513 }' 00:15:20.513 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.772 
23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:20.772 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=561 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.772 "name": "raid_bdev1", 00:15:20.772 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:20.772 "strip_size_kb": 64, 00:15:20.772 "state": "online", 00:15:20.772 "raid_level": "raid5f", 00:15:20.772 "superblock": true, 00:15:20.772 "num_base_bdevs": 3, 00:15:20.772 "num_base_bdevs_discovered": 3, 00:15:20.772 "num_base_bdevs_operational": 3, 00:15:20.772 "process": { 00:15:20.772 "type": "rebuild", 00:15:20.772 "target": "spare", 00:15:20.772 "progress": { 00:15:20.772 "blocks": 22528, 00:15:20.772 "percent": 17 00:15:20.772 } 00:15:20.772 }, 00:15:20.772 "base_bdevs_list": [ 00:15:20.772 { 00:15:20.772 "name": "spare", 00:15:20.772 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:20.772 "is_configured": true, 00:15:20.772 "data_offset": 2048, 00:15:20.772 "data_size": 63488 00:15:20.772 }, 00:15:20.772 { 00:15:20.772 "name": "BaseBdev2", 00:15:20.772 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:20.772 "is_configured": true, 00:15:20.772 "data_offset": 2048, 00:15:20.772 "data_size": 63488 00:15:20.772 }, 00:15:20.772 { 00:15:20.772 "name": "BaseBdev3", 00:15:20.772 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:20.772 "is_configured": true, 00:15:20.772 "data_offset": 2048, 00:15:20.772 "data_size": 63488 00:15:20.772 } 00:15:20.772 ] 00:15:20.772 }' 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.772 23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.772 
23:49:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.711 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.711 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.711 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.711 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.711 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.711 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.971 "name": "raid_bdev1", 00:15:21.971 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:21.971 "strip_size_kb": 64, 00:15:21.971 "state": "online", 00:15:21.971 "raid_level": "raid5f", 00:15:21.971 "superblock": true, 00:15:21.971 "num_base_bdevs": 3, 00:15:21.971 "num_base_bdevs_discovered": 3, 00:15:21.971 "num_base_bdevs_operational": 3, 00:15:21.971 "process": { 00:15:21.971 "type": "rebuild", 00:15:21.971 "target": "spare", 00:15:21.971 "progress": { 00:15:21.971 "blocks": 45056, 00:15:21.971 "percent": 35 00:15:21.971 } 00:15:21.971 }, 00:15:21.971 
"base_bdevs_list": [ 00:15:21.971 { 00:15:21.971 "name": "spare", 00:15:21.971 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:21.971 "is_configured": true, 00:15:21.971 "data_offset": 2048, 00:15:21.971 "data_size": 63488 00:15:21.971 }, 00:15:21.971 { 00:15:21.971 "name": "BaseBdev2", 00:15:21.971 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:21.971 "is_configured": true, 00:15:21.971 "data_offset": 2048, 00:15:21.971 "data_size": 63488 00:15:21.971 }, 00:15:21.971 { 00:15:21.971 "name": "BaseBdev3", 00:15:21.971 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:21.971 "is_configured": true, 00:15:21.971 "data_offset": 2048, 00:15:21.971 "data_size": 63488 00:15:21.971 } 00:15:21.971 ] 00:15:21.971 }' 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.971 23:49:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.911 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.911 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.911 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.911 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.911 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.911 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.911 23:49:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.912 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.912 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.912 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.912 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.172 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.172 "name": "raid_bdev1", 00:15:23.172 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:23.172 "strip_size_kb": 64, 00:15:23.172 "state": "online", 00:15:23.172 "raid_level": "raid5f", 00:15:23.172 "superblock": true, 00:15:23.172 "num_base_bdevs": 3, 00:15:23.172 "num_base_bdevs_discovered": 3, 00:15:23.172 "num_base_bdevs_operational": 3, 00:15:23.172 "process": { 00:15:23.172 "type": "rebuild", 00:15:23.172 "target": "spare", 00:15:23.172 "progress": { 00:15:23.172 "blocks": 69632, 00:15:23.172 "percent": 54 00:15:23.172 } 00:15:23.172 }, 00:15:23.172 "base_bdevs_list": [ 00:15:23.172 { 00:15:23.172 "name": "spare", 00:15:23.172 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:23.172 "is_configured": true, 00:15:23.172 "data_offset": 2048, 00:15:23.172 "data_size": 63488 00:15:23.172 }, 00:15:23.172 { 00:15:23.172 "name": "BaseBdev2", 00:15:23.172 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:23.172 "is_configured": true, 00:15:23.172 "data_offset": 2048, 00:15:23.172 "data_size": 63488 00:15:23.172 }, 00:15:23.172 { 00:15:23.172 "name": "BaseBdev3", 00:15:23.172 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:23.172 "is_configured": true, 00:15:23.172 "data_offset": 2048, 00:15:23.172 "data_size": 63488 00:15:23.172 } 00:15:23.172 ] 00:15:23.172 }' 00:15:23.172 23:49:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.172 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.172 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.172 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.172 23:49:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.111 "name": "raid_bdev1", 00:15:24.111 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:24.111 
"strip_size_kb": 64, 00:15:24.111 "state": "online", 00:15:24.111 "raid_level": "raid5f", 00:15:24.111 "superblock": true, 00:15:24.111 "num_base_bdevs": 3, 00:15:24.111 "num_base_bdevs_discovered": 3, 00:15:24.111 "num_base_bdevs_operational": 3, 00:15:24.111 "process": { 00:15:24.111 "type": "rebuild", 00:15:24.111 "target": "spare", 00:15:24.111 "progress": { 00:15:24.111 "blocks": 92160, 00:15:24.111 "percent": 72 00:15:24.111 } 00:15:24.111 }, 00:15:24.111 "base_bdevs_list": [ 00:15:24.111 { 00:15:24.111 "name": "spare", 00:15:24.111 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:24.111 "is_configured": true, 00:15:24.111 "data_offset": 2048, 00:15:24.111 "data_size": 63488 00:15:24.111 }, 00:15:24.111 { 00:15:24.111 "name": "BaseBdev2", 00:15:24.111 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:24.111 "is_configured": true, 00:15:24.111 "data_offset": 2048, 00:15:24.111 "data_size": 63488 00:15:24.111 }, 00:15:24.111 { 00:15:24.111 "name": "BaseBdev3", 00:15:24.111 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:24.111 "is_configured": true, 00:15:24.111 "data_offset": 2048, 00:15:24.111 "data_size": 63488 00:15:24.111 } 00:15:24.111 ] 00:15:24.111 }' 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.111 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.371 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.371 23:49:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.311 "name": "raid_bdev1", 00:15:25.311 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:25.311 "strip_size_kb": 64, 00:15:25.311 "state": "online", 00:15:25.311 "raid_level": "raid5f", 00:15:25.311 "superblock": true, 00:15:25.311 "num_base_bdevs": 3, 00:15:25.311 "num_base_bdevs_discovered": 3, 00:15:25.311 "num_base_bdevs_operational": 3, 00:15:25.311 "process": { 00:15:25.311 "type": "rebuild", 00:15:25.311 "target": "spare", 00:15:25.311 "progress": { 00:15:25.311 "blocks": 114688, 00:15:25.311 "percent": 90 00:15:25.311 } 00:15:25.311 }, 00:15:25.311 "base_bdevs_list": [ 00:15:25.311 { 00:15:25.311 "name": "spare", 00:15:25.311 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:25.311 "is_configured": true, 00:15:25.311 "data_offset": 2048, 00:15:25.311 "data_size": 63488 00:15:25.311 }, 00:15:25.311 { 00:15:25.311 "name": "BaseBdev2", 00:15:25.311 "uuid": 
"ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:25.311 "is_configured": true, 00:15:25.311 "data_offset": 2048, 00:15:25.311 "data_size": 63488 00:15:25.311 }, 00:15:25.311 { 00:15:25.311 "name": "BaseBdev3", 00:15:25.311 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:25.311 "is_configured": true, 00:15:25.311 "data_offset": 2048, 00:15:25.311 "data_size": 63488 00:15:25.311 } 00:15:25.311 ] 00:15:25.311 }' 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.311 23:49:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.881 [2024-12-06 23:49:37.245542] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:25.881 [2024-12-06 23:49:37.245612] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:25.881 [2024-12-06 23:49:37.245739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.451 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.452 "name": "raid_bdev1", 00:15:26.452 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:26.452 "strip_size_kb": 64, 00:15:26.452 "state": "online", 00:15:26.452 "raid_level": "raid5f", 00:15:26.452 "superblock": true, 00:15:26.452 "num_base_bdevs": 3, 00:15:26.452 "num_base_bdevs_discovered": 3, 00:15:26.452 "num_base_bdevs_operational": 3, 00:15:26.452 "base_bdevs_list": [ 00:15:26.452 { 00:15:26.452 "name": "spare", 00:15:26.452 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 2048, 00:15:26.452 "data_size": 63488 00:15:26.452 }, 00:15:26.452 { 00:15:26.452 "name": "BaseBdev2", 00:15:26.452 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 2048, 00:15:26.452 "data_size": 63488 00:15:26.452 }, 00:15:26.452 { 00:15:26.452 "name": "BaseBdev3", 00:15:26.452 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 2048, 00:15:26.452 "data_size": 63488 00:15:26.452 } 00:15:26.452 ] 00:15:26.452 }' 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.452 23:49:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.452 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.452 "name": "raid_bdev1", 00:15:26.452 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:26.452 "strip_size_kb": 64, 00:15:26.452 "state": "online", 00:15:26.452 "raid_level": "raid5f", 00:15:26.452 "superblock": true, 00:15:26.452 "num_base_bdevs": 3, 00:15:26.452 "num_base_bdevs_discovered": 3, 00:15:26.452 "num_base_bdevs_operational": 3, 00:15:26.452 "base_bdevs_list": [ 
00:15:26.452 { 00:15:26.452 "name": "spare", 00:15:26.452 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 2048, 00:15:26.452 "data_size": 63488 00:15:26.452 }, 00:15:26.452 { 00:15:26.452 "name": "BaseBdev2", 00:15:26.452 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 2048, 00:15:26.452 "data_size": 63488 00:15:26.452 }, 00:15:26.452 { 00:15:26.452 "name": "BaseBdev3", 00:15:26.452 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 2048, 00:15:26.452 "data_size": 63488 00:15:26.452 } 00:15:26.452 ] 00:15:26.452 }' 00:15:26.452 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.712 23:49:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.712 "name": "raid_bdev1", 00:15:26.712 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:26.712 "strip_size_kb": 64, 00:15:26.712 "state": "online", 00:15:26.712 "raid_level": "raid5f", 00:15:26.712 "superblock": true, 00:15:26.712 "num_base_bdevs": 3, 00:15:26.712 "num_base_bdevs_discovered": 3, 00:15:26.712 "num_base_bdevs_operational": 3, 00:15:26.712 "base_bdevs_list": [ 00:15:26.712 { 00:15:26.712 "name": "spare", 00:15:26.712 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:26.712 "is_configured": true, 00:15:26.712 "data_offset": 2048, 00:15:26.712 "data_size": 63488 00:15:26.712 }, 00:15:26.712 { 00:15:26.712 "name": "BaseBdev2", 00:15:26.712 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:26.712 "is_configured": true, 00:15:26.712 "data_offset": 2048, 00:15:26.712 "data_size": 63488 00:15:26.712 }, 00:15:26.712 { 00:15:26.712 "name": "BaseBdev3", 00:15:26.712 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:26.712 "is_configured": true, 00:15:26.712 "data_offset": 2048, 00:15:26.712 
"data_size": 63488 00:15:26.712 } 00:15:26.712 ] 00:15:26.712 }' 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.712 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.972 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.972 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.972 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.972 [2024-12-06 23:49:38.525234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.972 [2024-12-06 23:49:38.525268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.972 [2024-12-06 23:49:38.525345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.972 [2024-12-06 23:49:38.525441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.972 [2024-12-06 23:49:38.525465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:26.972 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:27.233 /dev/nbd0 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.233 23:49:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.233 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.493 1+0 records in 00:15:27.493 1+0 records out 00:15:27.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419213 s, 9.8 MB/s 00:15:27.493 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.493 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:27.493 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.493 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.493 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:27.493 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.493 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.493 23:49:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:27.493 /dev/nbd1 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:27.493 23:49:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.493 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.753 1+0 records in 00:15:27.753 1+0 records out 00:15:27.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400303 s, 10.2 MB/s 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.753 23:49:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.753 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.014 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.014 
23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:28.274 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:28.274 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:28.274 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:28.274 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.274 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.274 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:28.274 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.275 [2024-12-06 23:49:39.686977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:28.275 
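The data check a few lines up, `cmp -i 1048576 /dev/nbd0 /dev/nbd1`, skips the first 1048576 bytes of both exported devices before comparing: with the 512-byte blocklen reported in the log, that is exactly the `data_offset` of 2048 blocks, i.e. the superblock/metadata region that legitimately differs between a base bdev and the spare. A sketch under stated assumptions (temp files stand in for the nbd devices; sizes are illustrative):

```shell
# cmp -i N skips the first N bytes of BOTH files before comparing.
# 1048576 = data_offset 2048 blocks x 512 bytes/block.
a=$(mktemp); b=$(mktemp)
head -c 2097152 /dev/urandom > "$a"                            # 2 MiB stand-in for /dev/nbd0
{ head -c 1048576 /dev/zero; tail -c +1048577 "$a"; } > "$b"   # same payload, different first MiB
if cmp -s -i 1048576 "$a" "$b"; then same=yes; else same=no; fi
cmp -s "$a" "$b" && full=yes || full=no                        # without the skip they differ
rm -f "$a" "$b"
```

So the rebuild is considered correct when the payload regions match byte-for-byte even though the leading metadata regions do not.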
[2024-12-06 23:49:39.687056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.275 [2024-12-06 23:49:39.687077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:28.275 [2024-12-06 23:49:39.687088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.275 [2024-12-06 23:49:39.689370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.275 [2024-12-06 23:49:39.689413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:28.275 [2024-12-06 23:49:39.689498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:28.275 [2024-12-06 23:49:39.689569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.275 [2024-12-06 23:49:39.689744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.275 [2024-12-06 23:49:39.689856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.275 spare 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.275 [2024-12-06 23:49:39.789758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:28.275 [2024-12-06 23:49:39.789791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:28.275 [2024-12-06 23:49:39.790035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:28.275 [2024-12-06 23:49:39.794889] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:28.275 [2024-12-06 23:49:39.794914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:28.275 [2024-12-06 23:49:39.795111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:28.275 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.534 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.534 "name": "raid_bdev1", 00:15:28.534 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:28.534 "strip_size_kb": 64, 00:15:28.534 "state": "online", 00:15:28.534 "raid_level": "raid5f", 00:15:28.534 "superblock": true, 00:15:28.534 "num_base_bdevs": 3, 00:15:28.534 "num_base_bdevs_discovered": 3, 00:15:28.534 "num_base_bdevs_operational": 3, 00:15:28.534 "base_bdevs_list": [ 00:15:28.534 { 00:15:28.534 "name": "spare", 00:15:28.534 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:28.534 "is_configured": true, 00:15:28.534 "data_offset": 2048, 00:15:28.534 "data_size": 63488 00:15:28.534 }, 00:15:28.534 { 00:15:28.534 "name": "BaseBdev2", 00:15:28.534 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:28.534 "is_configured": true, 00:15:28.534 "data_offset": 2048, 00:15:28.534 "data_size": 63488 00:15:28.534 }, 00:15:28.534 { 00:15:28.534 "name": "BaseBdev3", 00:15:28.534 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:28.534 "is_configured": true, 00:15:28.534 "data_offset": 2048, 00:15:28.534 "data_size": 63488 00:15:28.534 } 00:15:28.534 ] 00:15:28.534 }' 00:15:28.534 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.534 23:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.794 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.794 "name": "raid_bdev1", 00:15:28.794 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:28.794 "strip_size_kb": 64, 00:15:28.794 "state": "online", 00:15:28.794 "raid_level": "raid5f", 00:15:28.794 "superblock": true, 00:15:28.794 "num_base_bdevs": 3, 00:15:28.794 "num_base_bdevs_discovered": 3, 00:15:28.794 "num_base_bdevs_operational": 3, 00:15:28.794 "base_bdevs_list": [ 00:15:28.794 { 00:15:28.794 "name": "spare", 00:15:28.794 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:28.795 "is_configured": true, 00:15:28.795 "data_offset": 2048, 00:15:28.795 "data_size": 63488 00:15:28.795 }, 00:15:28.795 { 00:15:28.795 "name": "BaseBdev2", 00:15:28.795 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:28.795 "is_configured": true, 00:15:28.795 "data_offset": 2048, 00:15:28.795 "data_size": 63488 00:15:28.795 }, 00:15:28.795 { 00:15:28.795 "name": "BaseBdev3", 00:15:28.795 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:28.795 "is_configured": true, 00:15:28.795 "data_offset": 2048, 00:15:28.795 "data_size": 63488 00:15:28.795 } 00:15:28.795 ] 00:15:28.795 }' 00:15:28.795 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:28.795 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.795 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.795 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.795 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.795 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:28.795 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.795 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.055 [2024-12-06 23:49:40.400155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.055 "name": "raid_bdev1", 00:15:29.055 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:29.055 "strip_size_kb": 64, 00:15:29.055 "state": "online", 00:15:29.055 "raid_level": "raid5f", 00:15:29.055 "superblock": true, 00:15:29.055 "num_base_bdevs": 3, 00:15:29.055 "num_base_bdevs_discovered": 2, 00:15:29.055 "num_base_bdevs_operational": 2, 00:15:29.055 "base_bdevs_list": [ 00:15:29.055 { 00:15:29.055 "name": null, 00:15:29.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.055 "is_configured": false, 00:15:29.055 "data_offset": 0, 00:15:29.055 "data_size": 63488 00:15:29.055 }, 00:15:29.055 { 00:15:29.055 "name": "BaseBdev2", 
00:15:29.055 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:29.055 "is_configured": true, 00:15:29.055 "data_offset": 2048, 00:15:29.055 "data_size": 63488 00:15:29.055 }, 00:15:29.055 { 00:15:29.055 "name": "BaseBdev3", 00:15:29.055 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:29.055 "is_configured": true, 00:15:29.055 "data_offset": 2048, 00:15:29.055 "data_size": 63488 00:15:29.055 } 00:15:29.055 ] 00:15:29.055 }' 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.055 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.315 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.315 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.315 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.315 [2024-12-06 23:49:40.859428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.315 [2024-12-06 23:49:40.859599] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:29.315 [2024-12-06 23:49:40.859623] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:29.315 [2024-12-06 23:49:40.859680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.315 [2024-12-06 23:49:40.874998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:29.315 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.315 23:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:29.588 [2024-12-06 23:49:40.882008] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.605 "name": "raid_bdev1", 00:15:30.605 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:30.605 "strip_size_kb": 64, 00:15:30.605 "state": "online", 00:15:30.605 
"raid_level": "raid5f", 00:15:30.605 "superblock": true, 00:15:30.605 "num_base_bdevs": 3, 00:15:30.605 "num_base_bdevs_discovered": 3, 00:15:30.605 "num_base_bdevs_operational": 3, 00:15:30.605 "process": { 00:15:30.605 "type": "rebuild", 00:15:30.605 "target": "spare", 00:15:30.605 "progress": { 00:15:30.605 "blocks": 20480, 00:15:30.605 "percent": 16 00:15:30.605 } 00:15:30.605 }, 00:15:30.605 "base_bdevs_list": [ 00:15:30.605 { 00:15:30.605 "name": "spare", 00:15:30.605 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:30.605 "is_configured": true, 00:15:30.605 "data_offset": 2048, 00:15:30.605 "data_size": 63488 00:15:30.605 }, 00:15:30.605 { 00:15:30.605 "name": "BaseBdev2", 00:15:30.605 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:30.605 "is_configured": true, 00:15:30.605 "data_offset": 2048, 00:15:30.605 "data_size": 63488 00:15:30.605 }, 00:15:30.605 { 00:15:30.605 "name": "BaseBdev3", 00:15:30.605 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:30.605 "is_configured": true, 00:15:30.605 "data_offset": 2048, 00:15:30.605 "data_size": 63488 00:15:30.605 } 00:15:30.605 ] 00:15:30.605 }' 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.605 23:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.605 [2024-12-06 23:49:42.012767] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.605 [2024-12-06 23:49:42.089457] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.605 [2024-12-06 23:49:42.089518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.605 [2024-12-06 23:49:42.089549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.605 [2024-12-06 23:49:42.089558] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.605 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.865 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.865 "name": "raid_bdev1", 00:15:30.865 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:30.865 "strip_size_kb": 64, 00:15:30.865 "state": "online", 00:15:30.865 "raid_level": "raid5f", 00:15:30.865 "superblock": true, 00:15:30.865 "num_base_bdevs": 3, 00:15:30.865 "num_base_bdevs_discovered": 2, 00:15:30.865 "num_base_bdevs_operational": 2, 00:15:30.865 "base_bdevs_list": [ 00:15:30.865 { 00:15:30.865 "name": null, 00:15:30.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.865 "is_configured": false, 00:15:30.865 "data_offset": 0, 00:15:30.865 "data_size": 63488 00:15:30.865 }, 00:15:30.865 { 00:15:30.865 "name": "BaseBdev2", 00:15:30.865 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:30.865 "is_configured": true, 00:15:30.865 "data_offset": 2048, 00:15:30.865 "data_size": 63488 00:15:30.865 }, 00:15:30.865 { 00:15:30.865 "name": "BaseBdev3", 00:15:30.865 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:30.865 "is_configured": true, 00:15:30.865 "data_offset": 2048, 00:15:30.865 "data_size": 63488 00:15:30.865 } 00:15:30.865 ] 00:15:30.865 }' 00:15:30.865 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.865 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.125 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.125 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.125 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.125 [2024-12-06 23:49:42.586541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.125 [2024-12-06 23:49:42.586619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.125 [2024-12-06 23:49:42.586638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:31.125 [2024-12-06 23:49:42.586652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.125 [2024-12-06 23:49:42.587137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.125 [2024-12-06 23:49:42.587168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.125 [2024-12-06 23:49:42.587258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:31.125 [2024-12-06 23:49:42.587280] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:31.125 [2024-12-06 23:49:42.587290] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:31.125 [2024-12-06 23:49:42.587311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.125 [2024-12-06 23:49:42.603138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:31.125 spare 00:15:31.125 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.125 23:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:31.125 [2024-12-06 23:49:42.609969] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.064 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.325 "name": "raid_bdev1", 00:15:32.325 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:32.325 "strip_size_kb": 64, 00:15:32.325 "state": 
"online", 00:15:32.325 "raid_level": "raid5f", 00:15:32.325 "superblock": true, 00:15:32.325 "num_base_bdevs": 3, 00:15:32.325 "num_base_bdevs_discovered": 3, 00:15:32.325 "num_base_bdevs_operational": 3, 00:15:32.325 "process": { 00:15:32.325 "type": "rebuild", 00:15:32.325 "target": "spare", 00:15:32.325 "progress": { 00:15:32.325 "blocks": 20480, 00:15:32.325 "percent": 16 00:15:32.325 } 00:15:32.325 }, 00:15:32.325 "base_bdevs_list": [ 00:15:32.325 { 00:15:32.325 "name": "spare", 00:15:32.325 "uuid": "9645d09a-0cee-5c2e-a729-7410ab7159b4", 00:15:32.325 "is_configured": true, 00:15:32.325 "data_offset": 2048, 00:15:32.325 "data_size": 63488 00:15:32.325 }, 00:15:32.325 { 00:15:32.325 "name": "BaseBdev2", 00:15:32.325 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:32.325 "is_configured": true, 00:15:32.325 "data_offset": 2048, 00:15:32.325 "data_size": 63488 00:15:32.325 }, 00:15:32.325 { 00:15:32.325 "name": "BaseBdev3", 00:15:32.325 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:32.325 "is_configured": true, 00:15:32.325 "data_offset": 2048, 00:15:32.325 "data_size": 63488 00:15:32.325 } 00:15:32.325 ] 00:15:32.325 }' 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.325 [2024-12-06 23:49:43.761168] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.325 [2024-12-06 23:49:43.817243] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.325 [2024-12-06 23:49:43.817314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.325 [2024-12-06 23:49:43.817331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.325 [2024-12-06 23:49:43.817338] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.325 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.585 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.585 "name": "raid_bdev1", 00:15:32.585 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:32.585 "strip_size_kb": 64, 00:15:32.585 "state": "online", 00:15:32.585 "raid_level": "raid5f", 00:15:32.585 "superblock": true, 00:15:32.585 "num_base_bdevs": 3, 00:15:32.585 "num_base_bdevs_discovered": 2, 00:15:32.585 "num_base_bdevs_operational": 2, 00:15:32.585 "base_bdevs_list": [ 00:15:32.585 { 00:15:32.585 "name": null, 00:15:32.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.585 "is_configured": false, 00:15:32.585 "data_offset": 0, 00:15:32.585 "data_size": 63488 00:15:32.585 }, 00:15:32.585 { 00:15:32.585 "name": "BaseBdev2", 00:15:32.585 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:32.585 "is_configured": true, 00:15:32.585 "data_offset": 2048, 00:15:32.585 "data_size": 63488 00:15:32.585 }, 00:15:32.585 { 00:15:32.585 "name": "BaseBdev3", 00:15:32.585 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:32.585 "is_configured": true, 00:15:32.585 "data_offset": 2048, 00:15:32.585 "data_size": 63488 00:15:32.585 } 00:15:32.585 ] 00:15:32.585 }' 00:15:32.586 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.586 23:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.846 "name": "raid_bdev1", 00:15:32.846 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:32.846 "strip_size_kb": 64, 00:15:32.846 "state": "online", 00:15:32.846 "raid_level": "raid5f", 00:15:32.846 "superblock": true, 00:15:32.846 "num_base_bdevs": 3, 00:15:32.846 "num_base_bdevs_discovered": 2, 00:15:32.846 "num_base_bdevs_operational": 2, 00:15:32.846 "base_bdevs_list": [ 00:15:32.846 { 00:15:32.846 "name": null, 00:15:32.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.846 "is_configured": false, 00:15:32.846 "data_offset": 0, 00:15:32.846 "data_size": 63488 00:15:32.846 }, 00:15:32.846 { 00:15:32.846 "name": "BaseBdev2", 00:15:32.846 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:32.846 "is_configured": true, 00:15:32.846 "data_offset": 2048, 00:15:32.846 "data_size": 63488 00:15:32.846 }, 00:15:32.846 { 00:15:32.846 "name": "BaseBdev3", 00:15:32.846 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:32.846 "is_configured": true, 
00:15:32.846 "data_offset": 2048, 00:15:32.846 "data_size": 63488 00:15:32.846 } 00:15:32.846 ] 00:15:32.846 }' 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.846 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.106 [2024-12-06 23:49:44.430029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.106 [2024-12-06 23:49:44.430079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.106 [2024-12-06 23:49:44.430101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:33.106 [2024-12-06 23:49:44.430110] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.106 [2024-12-06 23:49:44.430558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.106 [2024-12-06 
23:49:44.430585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.106 [2024-12-06 23:49:44.430675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:33.106 [2024-12-06 23:49:44.430690] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:33.106 [2024-12-06 23:49:44.430710] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:33.106 [2024-12-06 23:49:44.430720] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:33.106 BaseBdev1 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.106 23:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.045 23:49:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.045 "name": "raid_bdev1", 00:15:34.045 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:34.045 "strip_size_kb": 64, 00:15:34.045 "state": "online", 00:15:34.045 "raid_level": "raid5f", 00:15:34.045 "superblock": true, 00:15:34.045 "num_base_bdevs": 3, 00:15:34.045 "num_base_bdevs_discovered": 2, 00:15:34.045 "num_base_bdevs_operational": 2, 00:15:34.045 "base_bdevs_list": [ 00:15:34.045 { 00:15:34.045 "name": null, 00:15:34.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.045 "is_configured": false, 00:15:34.045 "data_offset": 0, 00:15:34.045 "data_size": 63488 00:15:34.045 }, 00:15:34.045 { 00:15:34.045 "name": "BaseBdev2", 00:15:34.045 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:34.045 "is_configured": true, 00:15:34.045 "data_offset": 2048, 00:15:34.045 "data_size": 63488 00:15:34.045 }, 00:15:34.045 { 00:15:34.045 "name": "BaseBdev3", 00:15:34.045 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:34.045 "is_configured": true, 00:15:34.045 "data_offset": 2048, 00:15:34.045 "data_size": 63488 00:15:34.045 } 00:15:34.045 ] 00:15:34.045 }' 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.045 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.615 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.615 "name": "raid_bdev1", 00:15:34.615 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:34.615 "strip_size_kb": 64, 00:15:34.615 "state": "online", 00:15:34.615 "raid_level": "raid5f", 00:15:34.615 "superblock": true, 00:15:34.615 "num_base_bdevs": 3, 00:15:34.615 "num_base_bdevs_discovered": 2, 00:15:34.615 "num_base_bdevs_operational": 2, 00:15:34.615 "base_bdevs_list": [ 00:15:34.615 { 00:15:34.615 "name": null, 00:15:34.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.615 "is_configured": false, 00:15:34.615 "data_offset": 0, 00:15:34.615 "data_size": 63488 00:15:34.615 }, 00:15:34.615 { 00:15:34.615 "name": "BaseBdev2", 00:15:34.615 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 
00:15:34.615 "is_configured": true, 00:15:34.615 "data_offset": 2048, 00:15:34.615 "data_size": 63488 00:15:34.615 }, 00:15:34.615 { 00:15:34.615 "name": "BaseBdev3", 00:15:34.615 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:34.616 "is_configured": true, 00:15:34.616 "data_offset": 2048, 00:15:34.616 "data_size": 63488 00:15:34.616 } 00:15:34.616 ] 00:15:34.616 }' 00:15:34.616 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.616 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.616 23:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.616 23:49:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.616 [2024-12-06 23:49:46.043328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.616 [2024-12-06 23:49:46.043483] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:34.616 [2024-12-06 23:49:46.043504] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:34.616 request: 00:15:34.616 { 00:15:34.616 "base_bdev": "BaseBdev1", 00:15:34.616 "raid_bdev": "raid_bdev1", 00:15:34.616 "method": "bdev_raid_add_base_bdev", 00:15:34.616 "req_id": 1 00:15:34.616 } 00:15:34.616 Got JSON-RPC error response 00:15:34.616 response: 00:15:34.616 { 00:15:34.616 "code": -22, 00:15:34.616 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:34.616 } 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:34.616 23:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.558 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.558 "name": "raid_bdev1", 00:15:35.558 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:35.558 "strip_size_kb": 64, 00:15:35.558 "state": "online", 00:15:35.558 "raid_level": "raid5f", 00:15:35.558 "superblock": true, 00:15:35.559 "num_base_bdevs": 3, 00:15:35.559 "num_base_bdevs_discovered": 2, 00:15:35.559 "num_base_bdevs_operational": 2, 00:15:35.559 "base_bdevs_list": [ 00:15:35.559 { 00:15:35.559 "name": null, 00:15:35.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.559 "is_configured": false, 00:15:35.559 "data_offset": 0, 00:15:35.559 "data_size": 63488 00:15:35.559 }, 00:15:35.559 { 00:15:35.559 
"name": "BaseBdev2", 00:15:35.559 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:35.559 "is_configured": true, 00:15:35.559 "data_offset": 2048, 00:15:35.559 "data_size": 63488 00:15:35.559 }, 00:15:35.559 { 00:15:35.559 "name": "BaseBdev3", 00:15:35.559 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:35.559 "is_configured": true, 00:15:35.559 "data_offset": 2048, 00:15:35.559 "data_size": 63488 00:15:35.559 } 00:15:35.559 ] 00:15:35.559 }' 00:15:35.559 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.559 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.129 "name": "raid_bdev1", 00:15:36.129 "uuid": "a10f5e67-de69-4202-825b-4d9aaa126509", 00:15:36.129 
"strip_size_kb": 64, 00:15:36.129 "state": "online", 00:15:36.129 "raid_level": "raid5f", 00:15:36.129 "superblock": true, 00:15:36.129 "num_base_bdevs": 3, 00:15:36.129 "num_base_bdevs_discovered": 2, 00:15:36.129 "num_base_bdevs_operational": 2, 00:15:36.129 "base_bdevs_list": [ 00:15:36.129 { 00:15:36.129 "name": null, 00:15:36.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.129 "is_configured": false, 00:15:36.129 "data_offset": 0, 00:15:36.129 "data_size": 63488 00:15:36.129 }, 00:15:36.129 { 00:15:36.129 "name": "BaseBdev2", 00:15:36.129 "uuid": "ced2abfb-2581-5992-8a31-8bf3f3e597b0", 00:15:36.129 "is_configured": true, 00:15:36.129 "data_offset": 2048, 00:15:36.129 "data_size": 63488 00:15:36.129 }, 00:15:36.129 { 00:15:36.129 "name": "BaseBdev3", 00:15:36.129 "uuid": "08394394-53e1-5d1f-b3a8-5c7e9fc62ad8", 00:15:36.129 "is_configured": true, 00:15:36.129 "data_offset": 2048, 00:15:36.129 "data_size": 63488 00:15:36.129 } 00:15:36.129 ] 00:15:36.129 }' 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81926 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81926 ']' 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81926 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.129 23:49:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81926 00:15:36.129 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.130 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.130 killing process with pid 81926 00:15:36.130 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81926' 00:15:36.130 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81926 00:15:36.130 Received shutdown signal, test time was about 60.000000 seconds 00:15:36.130 00:15:36.130 Latency(us) 00:15:36.130 [2024-12-06T23:49:47.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.130 [2024-12-06T23:49:47.693Z] =================================================================================================================== 00:15:36.130 [2024-12-06T23:49:47.693Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:36.130 [2024-12-06 23:49:47.673538] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.130 [2024-12-06 23:49:47.673652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.130 [2024-12-06 23:49:47.673736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.130 [2024-12-06 23:49:47.673749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:36.130 23:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81926 00:15:36.698 [2024-12-06 23:49:48.049091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.637 23:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:37.637 00:15:37.637 real 0m22.935s 00:15:37.637 user 0m29.214s 
00:15:37.637 sys 0m2.796s 00:15:37.638 23:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.638 23:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.638 ************************************ 00:15:37.638 END TEST raid5f_rebuild_test_sb 00:15:37.638 ************************************ 00:15:37.638 23:49:49 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:37.638 23:49:49 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:37.638 23:49:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:37.638 23:49:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.638 23:49:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.638 ************************************ 00:15:37.638 START TEST raid5f_state_function_test 00:15:37.638 ************************************ 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82675 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:37.638 Process raid pid: 82675 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82675' 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82675 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82675 ']' 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.638 23:49:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.899 [2024-12-06 23:49:49.282205] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:15:37.899 [2024-12-06 23:49:49.282334] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.899 [2024-12-06 23:49:49.458698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.159 [2024-12-06 23:49:49.568045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.420 [2024-12-06 23:49:49.761355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.420 [2024-12-06 23:49:49.761392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.679 [2024-12-06 23:49:50.097374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.679 [2024-12-06 23:49:50.097432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.679 [2024-12-06 23:49:50.097443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.679 [2024-12-06 23:49:50.097452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.679 [2024-12-06 23:49:50.097458] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:38.679 [2024-12-06 23:49:50.097467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.679 [2024-12-06 23:49:50.097473] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:38.679 [2024-12-06 23:49:50.097481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.679 23:49:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.679 "name": "Existed_Raid", 00:15:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.679 "strip_size_kb": 64, 00:15:38.679 "state": "configuring", 00:15:38.679 "raid_level": "raid5f", 00:15:38.679 "superblock": false, 00:15:38.679 "num_base_bdevs": 4, 00:15:38.679 "num_base_bdevs_discovered": 0, 00:15:38.679 "num_base_bdevs_operational": 4, 00:15:38.679 "base_bdevs_list": [ 00:15:38.679 { 00:15:38.679 "name": "BaseBdev1", 00:15:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.679 "is_configured": false, 00:15:38.679 "data_offset": 0, 00:15:38.679 "data_size": 0 00:15:38.679 }, 00:15:38.679 { 00:15:38.679 "name": "BaseBdev2", 00:15:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.679 "is_configured": false, 00:15:38.679 "data_offset": 0, 00:15:38.679 "data_size": 0 00:15:38.679 }, 00:15:38.679 { 00:15:38.679 "name": "BaseBdev3", 00:15:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.679 "is_configured": false, 00:15:38.679 "data_offset": 0, 00:15:38.679 "data_size": 0 00:15:38.679 }, 00:15:38.679 { 00:15:38.679 "name": "BaseBdev4", 00:15:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.679 "is_configured": false, 00:15:38.679 "data_offset": 0, 00:15:38.679 "data_size": 0 00:15:38.679 } 00:15:38.679 ] 00:15:38.679 }' 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.679 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 [2024-12-06 23:49:50.572487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.247 [2024-12-06 23:49:50.572528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 [2024-12-06 23:49:50.580491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.247 [2024-12-06 23:49:50.580534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.247 [2024-12-06 23:49:50.580543] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.247 [2024-12-06 23:49:50.580551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.247 [2024-12-06 23:49:50.580557] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.247 [2024-12-06 23:49:50.580566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.247 [2024-12-06 23:49:50.580572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:39.247 [2024-12-06 23:49:50.580580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 [2024-12-06 23:49:50.622238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.247 BaseBdev1 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.247 
23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 [ 00:15:39.247 { 00:15:39.247 "name": "BaseBdev1", 00:15:39.247 "aliases": [ 00:15:39.247 "621a3008-c206-4da3-b42a-b10a83abca44" 00:15:39.247 ], 00:15:39.247 "product_name": "Malloc disk", 00:15:39.247 "block_size": 512, 00:15:39.247 "num_blocks": 65536, 00:15:39.247 "uuid": "621a3008-c206-4da3-b42a-b10a83abca44", 00:15:39.247 "assigned_rate_limits": { 00:15:39.247 "rw_ios_per_sec": 0, 00:15:39.247 "rw_mbytes_per_sec": 0, 00:15:39.247 "r_mbytes_per_sec": 0, 00:15:39.247 "w_mbytes_per_sec": 0 00:15:39.247 }, 00:15:39.247 "claimed": true, 00:15:39.247 "claim_type": "exclusive_write", 00:15:39.247 "zoned": false, 00:15:39.247 "supported_io_types": { 00:15:39.247 "read": true, 00:15:39.247 "write": true, 00:15:39.247 "unmap": true, 00:15:39.247 "flush": true, 00:15:39.247 "reset": true, 00:15:39.247 "nvme_admin": false, 00:15:39.247 "nvme_io": false, 00:15:39.247 "nvme_io_md": false, 00:15:39.247 "write_zeroes": true, 00:15:39.247 "zcopy": true, 00:15:39.247 "get_zone_info": false, 00:15:39.247 "zone_management": false, 00:15:39.247 "zone_append": false, 00:15:39.247 "compare": false, 00:15:39.247 "compare_and_write": false, 00:15:39.247 "abort": true, 00:15:39.247 "seek_hole": false, 00:15:39.247 "seek_data": false, 00:15:39.247 "copy": true, 00:15:39.247 "nvme_iov_md": false 00:15:39.247 }, 00:15:39.247 "memory_domains": [ 00:15:39.247 { 00:15:39.247 "dma_device_id": "system", 00:15:39.247 "dma_device_type": 1 00:15:39.247 }, 00:15:39.247 { 00:15:39.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.247 "dma_device_type": 2 00:15:39.247 } 00:15:39.247 ], 00:15:39.247 "driver_specific": {} 00:15:39.247 } 
00:15:39.247 ] 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.247 "name": "Existed_Raid", 00:15:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.247 "strip_size_kb": 64, 00:15:39.247 "state": "configuring", 00:15:39.247 "raid_level": "raid5f", 00:15:39.247 "superblock": false, 00:15:39.247 "num_base_bdevs": 4, 00:15:39.247 "num_base_bdevs_discovered": 1, 00:15:39.247 "num_base_bdevs_operational": 4, 00:15:39.247 "base_bdevs_list": [ 00:15:39.247 { 00:15:39.247 "name": "BaseBdev1", 00:15:39.247 "uuid": "621a3008-c206-4da3-b42a-b10a83abca44", 00:15:39.247 "is_configured": true, 00:15:39.247 "data_offset": 0, 00:15:39.247 "data_size": 65536 00:15:39.247 }, 00:15:39.247 { 00:15:39.247 "name": "BaseBdev2", 00:15:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.247 "is_configured": false, 00:15:39.247 "data_offset": 0, 00:15:39.247 "data_size": 0 00:15:39.247 }, 00:15:39.247 { 00:15:39.247 "name": "BaseBdev3", 00:15:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.247 "is_configured": false, 00:15:39.247 "data_offset": 0, 00:15:39.247 "data_size": 0 00:15:39.247 }, 00:15:39.247 { 00:15:39.247 "name": "BaseBdev4", 00:15:39.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.247 "is_configured": false, 00:15:39.247 "data_offset": 0, 00:15:39.247 "data_size": 0 00:15:39.247 } 00:15:39.247 ] 00:15:39.247 }' 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.247 23:49:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.817 
[2024-12-06 23:49:51.093435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.817 [2024-12-06 23:49:51.093477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.817 [2024-12-06 23:49:51.101480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.817 [2024-12-06 23:49:51.103250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.817 [2024-12-06 23:49:51.103293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.817 [2024-12-06 23:49:51.103302] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.817 [2024-12-06 23:49:51.103312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.817 [2024-12-06 23:49:51.103318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:39.817 [2024-12-06 23:49:51.103325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.817 "name": "Existed_Raid", 00:15:39.817 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:39.817 "strip_size_kb": 64, 00:15:39.817 "state": "configuring", 00:15:39.817 "raid_level": "raid5f", 00:15:39.817 "superblock": false, 00:15:39.817 "num_base_bdevs": 4, 00:15:39.817 "num_base_bdevs_discovered": 1, 00:15:39.817 "num_base_bdevs_operational": 4, 00:15:39.817 "base_bdevs_list": [ 00:15:39.817 { 00:15:39.817 "name": "BaseBdev1", 00:15:39.817 "uuid": "621a3008-c206-4da3-b42a-b10a83abca44", 00:15:39.817 "is_configured": true, 00:15:39.817 "data_offset": 0, 00:15:39.817 "data_size": 65536 00:15:39.817 }, 00:15:39.817 { 00:15:39.817 "name": "BaseBdev2", 00:15:39.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.817 "is_configured": false, 00:15:39.817 "data_offset": 0, 00:15:39.817 "data_size": 0 00:15:39.817 }, 00:15:39.817 { 00:15:39.817 "name": "BaseBdev3", 00:15:39.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.817 "is_configured": false, 00:15:39.817 "data_offset": 0, 00:15:39.817 "data_size": 0 00:15:39.817 }, 00:15:39.817 { 00:15:39.817 "name": "BaseBdev4", 00:15:39.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.817 "is_configured": false, 00:15:39.817 "data_offset": 0, 00:15:39.817 "data_size": 0 00:15:39.817 } 00:15:39.817 ] 00:15:39.817 }' 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.817 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 [2024-12-06 23:49:51.566306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.077 BaseBdev2 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.077 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 [ 00:15:40.077 { 00:15:40.077 "name": "BaseBdev2", 00:15:40.077 "aliases": [ 00:15:40.077 "5d31d15d-70e2-42b0-9c1f-5f3603bd0735" 00:15:40.077 ], 00:15:40.077 "product_name": "Malloc disk", 00:15:40.077 "block_size": 512, 00:15:40.077 "num_blocks": 65536, 00:15:40.077 "uuid": "5d31d15d-70e2-42b0-9c1f-5f3603bd0735", 00:15:40.077 "assigned_rate_limits": { 00:15:40.077 "rw_ios_per_sec": 0, 00:15:40.077 "rw_mbytes_per_sec": 0, 00:15:40.077 
"r_mbytes_per_sec": 0, 00:15:40.077 "w_mbytes_per_sec": 0 00:15:40.077 }, 00:15:40.077 "claimed": true, 00:15:40.077 "claim_type": "exclusive_write", 00:15:40.077 "zoned": false, 00:15:40.077 "supported_io_types": { 00:15:40.077 "read": true, 00:15:40.077 "write": true, 00:15:40.077 "unmap": true, 00:15:40.077 "flush": true, 00:15:40.077 "reset": true, 00:15:40.077 "nvme_admin": false, 00:15:40.077 "nvme_io": false, 00:15:40.077 "nvme_io_md": false, 00:15:40.077 "write_zeroes": true, 00:15:40.077 "zcopy": true, 00:15:40.077 "get_zone_info": false, 00:15:40.077 "zone_management": false, 00:15:40.077 "zone_append": false, 00:15:40.077 "compare": false, 00:15:40.077 "compare_and_write": false, 00:15:40.077 "abort": true, 00:15:40.077 "seek_hole": false, 00:15:40.077 "seek_data": false, 00:15:40.078 "copy": true, 00:15:40.078 "nvme_iov_md": false 00:15:40.078 }, 00:15:40.078 "memory_domains": [ 00:15:40.078 { 00:15:40.078 "dma_device_id": "system", 00:15:40.078 "dma_device_type": 1 00:15:40.078 }, 00:15:40.078 { 00:15:40.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.078 "dma_device_type": 2 00:15:40.078 } 00:15:40.078 ], 00:15:40.078 "driver_specific": {} 00:15:40.078 } 00:15:40.078 ] 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.078 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.338 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.338 "name": "Existed_Raid", 00:15:40.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.338 "strip_size_kb": 64, 00:15:40.338 "state": "configuring", 00:15:40.338 "raid_level": "raid5f", 00:15:40.338 "superblock": false, 00:15:40.338 "num_base_bdevs": 4, 00:15:40.338 "num_base_bdevs_discovered": 2, 00:15:40.338 "num_base_bdevs_operational": 4, 00:15:40.338 "base_bdevs_list": [ 00:15:40.338 { 00:15:40.338 "name": "BaseBdev1", 00:15:40.338 "uuid": 
"621a3008-c206-4da3-b42a-b10a83abca44", 00:15:40.338 "is_configured": true, 00:15:40.338 "data_offset": 0, 00:15:40.338 "data_size": 65536 00:15:40.338 }, 00:15:40.338 { 00:15:40.338 "name": "BaseBdev2", 00:15:40.338 "uuid": "5d31d15d-70e2-42b0-9c1f-5f3603bd0735", 00:15:40.338 "is_configured": true, 00:15:40.338 "data_offset": 0, 00:15:40.338 "data_size": 65536 00:15:40.338 }, 00:15:40.338 { 00:15:40.338 "name": "BaseBdev3", 00:15:40.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.338 "is_configured": false, 00:15:40.338 "data_offset": 0, 00:15:40.338 "data_size": 0 00:15:40.338 }, 00:15:40.338 { 00:15:40.338 "name": "BaseBdev4", 00:15:40.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.338 "is_configured": false, 00:15:40.338 "data_offset": 0, 00:15:40.338 "data_size": 0 00:15:40.338 } 00:15:40.338 ] 00:15:40.338 }' 00:15:40.338 23:49:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.338 23:49:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.598 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:40.598 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.598 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.598 [2024-12-06 23:49:52.127805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.598 BaseBdev3 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.599 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.599 [ 00:15:40.599 { 00:15:40.599 "name": "BaseBdev3", 00:15:40.599 "aliases": [ 00:15:40.599 "c1aa99d7-d30a-4efc-840c-d72ce39f3586" 00:15:40.599 ], 00:15:40.599 "product_name": "Malloc disk", 00:15:40.599 "block_size": 512, 00:15:40.599 "num_blocks": 65536, 00:15:40.599 "uuid": "c1aa99d7-d30a-4efc-840c-d72ce39f3586", 00:15:40.599 "assigned_rate_limits": { 00:15:40.599 "rw_ios_per_sec": 0, 00:15:40.599 "rw_mbytes_per_sec": 0, 00:15:40.599 "r_mbytes_per_sec": 0, 00:15:40.599 "w_mbytes_per_sec": 0 00:15:40.599 }, 00:15:40.599 "claimed": true, 00:15:40.599 "claim_type": "exclusive_write", 00:15:40.599 "zoned": false, 00:15:40.599 "supported_io_types": { 00:15:40.599 "read": true, 00:15:40.599 "write": true, 00:15:40.599 "unmap": true, 00:15:40.599 "flush": true, 00:15:40.599 "reset": true, 00:15:40.599 "nvme_admin": false, 
00:15:40.599 "nvme_io": false, 00:15:40.599 "nvme_io_md": false, 00:15:40.599 "write_zeroes": true, 00:15:40.599 "zcopy": true, 00:15:40.599 "get_zone_info": false, 00:15:40.599 "zone_management": false, 00:15:40.599 "zone_append": false, 00:15:40.599 "compare": false, 00:15:40.599 "compare_and_write": false, 00:15:40.859 "abort": true, 00:15:40.859 "seek_hole": false, 00:15:40.859 "seek_data": false, 00:15:40.859 "copy": true, 00:15:40.859 "nvme_iov_md": false 00:15:40.859 }, 00:15:40.859 "memory_domains": [ 00:15:40.859 { 00:15:40.859 "dma_device_id": "system", 00:15:40.859 "dma_device_type": 1 00:15:40.859 }, 00:15:40.859 { 00:15:40.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.859 "dma_device_type": 2 00:15:40.859 } 00:15:40.859 ], 00:15:40.859 "driver_specific": {} 00:15:40.859 } 00:15:40.859 ] 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.859 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.860 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.860 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.860 "name": "Existed_Raid", 00:15:40.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.860 "strip_size_kb": 64, 00:15:40.860 "state": "configuring", 00:15:40.860 "raid_level": "raid5f", 00:15:40.860 "superblock": false, 00:15:40.860 "num_base_bdevs": 4, 00:15:40.860 "num_base_bdevs_discovered": 3, 00:15:40.860 "num_base_bdevs_operational": 4, 00:15:40.860 "base_bdevs_list": [ 00:15:40.860 { 00:15:40.860 "name": "BaseBdev1", 00:15:40.860 "uuid": "621a3008-c206-4da3-b42a-b10a83abca44", 00:15:40.860 "is_configured": true, 00:15:40.860 "data_offset": 0, 00:15:40.860 "data_size": 65536 00:15:40.860 }, 00:15:40.860 { 00:15:40.860 "name": "BaseBdev2", 00:15:40.860 "uuid": "5d31d15d-70e2-42b0-9c1f-5f3603bd0735", 00:15:40.860 "is_configured": true, 00:15:40.860 "data_offset": 0, 00:15:40.860 "data_size": 65536 00:15:40.860 }, 00:15:40.860 { 
00:15:40.860 "name": "BaseBdev3", 00:15:40.860 "uuid": "c1aa99d7-d30a-4efc-840c-d72ce39f3586", 00:15:40.860 "is_configured": true, 00:15:40.860 "data_offset": 0, 00:15:40.860 "data_size": 65536 00:15:40.860 }, 00:15:40.860 { 00:15:40.860 "name": "BaseBdev4", 00:15:40.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.860 "is_configured": false, 00:15:40.860 "data_offset": 0, 00:15:40.860 "data_size": 0 00:15:40.860 } 00:15:40.860 ] 00:15:40.860 }' 00:15:40.860 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.860 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.120 [2024-12-06 23:49:52.653418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:41.120 [2024-12-06 23:49:52.653491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:41.120 [2024-12-06 23:49:52.653501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:41.120 [2024-12-06 23:49:52.653795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:41.120 [2024-12-06 23:49:52.660975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:41.120 [2024-12-06 23:49:52.661001] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:41.120 [2024-12-06 23:49:52.661270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.120 BaseBdev4 00:15:41.120 23:49:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.120 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.380 [ 00:15:41.380 { 00:15:41.380 "name": "BaseBdev4", 00:15:41.380 "aliases": [ 00:15:41.380 "774a103d-91fd-488a-aad3-1b508828e4ba" 00:15:41.380 ], 00:15:41.380 "product_name": "Malloc disk", 00:15:41.380 "block_size": 512, 00:15:41.380 "num_blocks": 65536, 00:15:41.380 "uuid": "774a103d-91fd-488a-aad3-1b508828e4ba", 00:15:41.380 "assigned_rate_limits": { 00:15:41.380 "rw_ios_per_sec": 0, 00:15:41.380 
"rw_mbytes_per_sec": 0, 00:15:41.380 "r_mbytes_per_sec": 0, 00:15:41.380 "w_mbytes_per_sec": 0 00:15:41.380 }, 00:15:41.380 "claimed": true, 00:15:41.380 "claim_type": "exclusive_write", 00:15:41.380 "zoned": false, 00:15:41.380 "supported_io_types": { 00:15:41.380 "read": true, 00:15:41.380 "write": true, 00:15:41.380 "unmap": true, 00:15:41.380 "flush": true, 00:15:41.380 "reset": true, 00:15:41.380 "nvme_admin": false, 00:15:41.380 "nvme_io": false, 00:15:41.380 "nvme_io_md": false, 00:15:41.380 "write_zeroes": true, 00:15:41.380 "zcopy": true, 00:15:41.380 "get_zone_info": false, 00:15:41.380 "zone_management": false, 00:15:41.380 "zone_append": false, 00:15:41.380 "compare": false, 00:15:41.380 "compare_and_write": false, 00:15:41.380 "abort": true, 00:15:41.380 "seek_hole": false, 00:15:41.380 "seek_data": false, 00:15:41.380 "copy": true, 00:15:41.380 "nvme_iov_md": false 00:15:41.380 }, 00:15:41.380 "memory_domains": [ 00:15:41.380 { 00:15:41.380 "dma_device_id": "system", 00:15:41.380 "dma_device_type": 1 00:15:41.380 }, 00:15:41.380 { 00:15:41.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.380 "dma_device_type": 2 00:15:41.380 } 00:15:41.380 ], 00:15:41.380 "driver_specific": {} 00:15:41.380 } 00:15:41.380 ] 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.380 23:49:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.380 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.381 "name": "Existed_Raid", 00:15:41.381 "uuid": "7165094a-da40-4542-a562-c2340c609ddd", 00:15:41.381 "strip_size_kb": 64, 00:15:41.381 "state": "online", 00:15:41.381 "raid_level": "raid5f", 00:15:41.381 "superblock": false, 00:15:41.381 "num_base_bdevs": 4, 00:15:41.381 "num_base_bdevs_discovered": 4, 00:15:41.381 "num_base_bdevs_operational": 4, 00:15:41.381 "base_bdevs_list": [ 00:15:41.381 { 00:15:41.381 "name": 
"BaseBdev1", 00:15:41.381 "uuid": "621a3008-c206-4da3-b42a-b10a83abca44", 00:15:41.381 "is_configured": true, 00:15:41.381 "data_offset": 0, 00:15:41.381 "data_size": 65536 00:15:41.381 }, 00:15:41.381 { 00:15:41.381 "name": "BaseBdev2", 00:15:41.381 "uuid": "5d31d15d-70e2-42b0-9c1f-5f3603bd0735", 00:15:41.381 "is_configured": true, 00:15:41.381 "data_offset": 0, 00:15:41.381 "data_size": 65536 00:15:41.381 }, 00:15:41.381 { 00:15:41.381 "name": "BaseBdev3", 00:15:41.381 "uuid": "c1aa99d7-d30a-4efc-840c-d72ce39f3586", 00:15:41.381 "is_configured": true, 00:15:41.381 "data_offset": 0, 00:15:41.381 "data_size": 65536 00:15:41.381 }, 00:15:41.381 { 00:15:41.381 "name": "BaseBdev4", 00:15:41.381 "uuid": "774a103d-91fd-488a-aad3-1b508828e4ba", 00:15:41.381 "is_configured": true, 00:15:41.381 "data_offset": 0, 00:15:41.381 "data_size": 65536 00:15:41.381 } 00:15:41.381 ] 00:15:41.381 }' 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.381 23:49:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.642 [2024-12-06 23:49:53.156572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:41.642 "name": "Existed_Raid", 00:15:41.642 "aliases": [ 00:15:41.642 "7165094a-da40-4542-a562-c2340c609ddd" 00:15:41.642 ], 00:15:41.642 "product_name": "Raid Volume", 00:15:41.642 "block_size": 512, 00:15:41.642 "num_blocks": 196608, 00:15:41.642 "uuid": "7165094a-da40-4542-a562-c2340c609ddd", 00:15:41.642 "assigned_rate_limits": { 00:15:41.642 "rw_ios_per_sec": 0, 00:15:41.642 "rw_mbytes_per_sec": 0, 00:15:41.642 "r_mbytes_per_sec": 0, 00:15:41.642 "w_mbytes_per_sec": 0 00:15:41.642 }, 00:15:41.642 "claimed": false, 00:15:41.642 "zoned": false, 00:15:41.642 "supported_io_types": { 00:15:41.642 "read": true, 00:15:41.642 "write": true, 00:15:41.642 "unmap": false, 00:15:41.642 "flush": false, 00:15:41.642 "reset": true, 00:15:41.642 "nvme_admin": false, 00:15:41.642 "nvme_io": false, 00:15:41.642 "nvme_io_md": false, 00:15:41.642 "write_zeroes": true, 00:15:41.642 "zcopy": false, 00:15:41.642 "get_zone_info": false, 00:15:41.642 "zone_management": false, 00:15:41.642 "zone_append": false, 00:15:41.642 "compare": false, 00:15:41.642 "compare_and_write": false, 00:15:41.642 "abort": false, 00:15:41.642 "seek_hole": false, 00:15:41.642 "seek_data": false, 00:15:41.642 "copy": false, 00:15:41.642 "nvme_iov_md": false 00:15:41.642 }, 00:15:41.642 "driver_specific": { 00:15:41.642 "raid": { 00:15:41.642 "uuid": "7165094a-da40-4542-a562-c2340c609ddd", 00:15:41.642 "strip_size_kb": 64, 
00:15:41.642 "state": "online", 00:15:41.642 "raid_level": "raid5f", 00:15:41.642 "superblock": false, 00:15:41.642 "num_base_bdevs": 4, 00:15:41.642 "num_base_bdevs_discovered": 4, 00:15:41.642 "num_base_bdevs_operational": 4, 00:15:41.642 "base_bdevs_list": [ 00:15:41.642 { 00:15:41.642 "name": "BaseBdev1", 00:15:41.642 "uuid": "621a3008-c206-4da3-b42a-b10a83abca44", 00:15:41.642 "is_configured": true, 00:15:41.642 "data_offset": 0, 00:15:41.642 "data_size": 65536 00:15:41.642 }, 00:15:41.642 { 00:15:41.642 "name": "BaseBdev2", 00:15:41.642 "uuid": "5d31d15d-70e2-42b0-9c1f-5f3603bd0735", 00:15:41.642 "is_configured": true, 00:15:41.642 "data_offset": 0, 00:15:41.642 "data_size": 65536 00:15:41.642 }, 00:15:41.642 { 00:15:41.642 "name": "BaseBdev3", 00:15:41.642 "uuid": "c1aa99d7-d30a-4efc-840c-d72ce39f3586", 00:15:41.642 "is_configured": true, 00:15:41.642 "data_offset": 0, 00:15:41.642 "data_size": 65536 00:15:41.642 }, 00:15:41.642 { 00:15:41.642 "name": "BaseBdev4", 00:15:41.642 "uuid": "774a103d-91fd-488a-aad3-1b508828e4ba", 00:15:41.642 "is_configured": true, 00:15:41.642 "data_offset": 0, 00:15:41.642 "data_size": 65536 00:15:41.642 } 00:15:41.642 ] 00:15:41.642 } 00:15:41.642 } 00:15:41.642 }' 00:15:41.642 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:41.903 BaseBdev2 00:15:41.903 BaseBdev3 00:15:41.903 BaseBdev4' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.903 23:49:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.903 23:49:53 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:41.903 [2024-12-06 23:49:53.431959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.164 23:49:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.164 "name": "Existed_Raid", 00:15:42.164 "uuid": "7165094a-da40-4542-a562-c2340c609ddd", 00:15:42.164 "strip_size_kb": 64, 00:15:42.164 "state": "online", 00:15:42.164 "raid_level": "raid5f", 00:15:42.164 "superblock": false, 00:15:42.164 "num_base_bdevs": 4, 00:15:42.164 "num_base_bdevs_discovered": 3, 00:15:42.164 "num_base_bdevs_operational": 3, 00:15:42.164 "base_bdevs_list": [ 00:15:42.164 { 00:15:42.164 "name": null, 00:15:42.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.164 "is_configured": false, 00:15:42.164 "data_offset": 0, 00:15:42.164 "data_size": 65536 00:15:42.164 }, 00:15:42.164 { 00:15:42.164 "name": "BaseBdev2", 00:15:42.164 "uuid": "5d31d15d-70e2-42b0-9c1f-5f3603bd0735", 00:15:42.164 "is_configured": true, 00:15:42.164 "data_offset": 0, 00:15:42.164 "data_size": 65536 00:15:42.164 }, 00:15:42.164 { 00:15:42.164 "name": "BaseBdev3", 00:15:42.164 "uuid": "c1aa99d7-d30a-4efc-840c-d72ce39f3586", 00:15:42.164 "is_configured": true, 00:15:42.164 "data_offset": 0, 00:15:42.164 "data_size": 65536 00:15:42.164 }, 00:15:42.164 { 00:15:42.164 "name": "BaseBdev4", 00:15:42.164 "uuid": "774a103d-91fd-488a-aad3-1b508828e4ba", 00:15:42.164 "is_configured": true, 00:15:42.164 "data_offset": 0, 00:15:42.164 "data_size": 65536 00:15:42.164 } 00:15:42.164 ] 00:15:42.164 }' 00:15:42.164 
23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.164 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.425 23:49:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.425 [2024-12-06 23:49:53.985805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.425 [2024-12-06 23:49:53.985907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.685 [2024-12-06 23:49:54.075995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.685 [2024-12-06 23:49:54.119908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.685 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.946 [2024-12-06 23:49:54.269140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:42.946 [2024-12-06 23:49:54.269199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.946 BaseBdev2 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.946 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.946 [ 00:15:42.946 { 00:15:42.946 "name": "BaseBdev2", 00:15:42.946 "aliases": [ 00:15:42.946 "b8a698e1-91f5-4925-83ae-f1cf04251367" 00:15:42.946 ], 00:15:42.946 "product_name": "Malloc disk", 00:15:42.946 "block_size": 512, 00:15:42.946 "num_blocks": 65536, 00:15:42.946 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:42.946 "assigned_rate_limits": { 00:15:42.946 "rw_ios_per_sec": 0, 00:15:42.946 "rw_mbytes_per_sec": 0, 00:15:42.946 "r_mbytes_per_sec": 0, 00:15:42.946 "w_mbytes_per_sec": 0 00:15:42.946 }, 00:15:42.946 "claimed": false, 00:15:42.946 "zoned": false, 00:15:42.946 "supported_io_types": { 00:15:42.946 "read": true, 00:15:42.946 "write": true, 00:15:42.946 "unmap": true, 00:15:42.946 "flush": true, 00:15:42.946 "reset": true, 00:15:42.946 "nvme_admin": false, 00:15:42.946 "nvme_io": false, 00:15:42.946 "nvme_io_md": false, 00:15:42.946 "write_zeroes": true, 00:15:42.946 "zcopy": true, 00:15:42.946 "get_zone_info": false, 00:15:42.946 "zone_management": false, 00:15:42.946 "zone_append": false, 00:15:42.946 "compare": false, 00:15:42.947 "compare_and_write": false, 00:15:42.947 "abort": true, 00:15:42.947 "seek_hole": false, 00:15:42.947 "seek_data": false, 00:15:42.947 "copy": true, 00:15:42.947 "nvme_iov_md": false 00:15:42.947 }, 00:15:42.947 "memory_domains": [ 00:15:42.947 { 00:15:42.947 "dma_device_id": "system", 00:15:42.947 
"dma_device_type": 1 00:15:42.947 }, 00:15:42.947 { 00:15:42.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.947 "dma_device_type": 2 00:15:42.947 } 00:15:42.947 ], 00:15:42.947 "driver_specific": {} 00:15:42.947 } 00:15:42.947 ] 00:15:42.947 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.947 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:42.947 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:42.947 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:42.947 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:42.947 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.947 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.208 BaseBdev3 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.208 23:49:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.208 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.208 [ 00:15:43.208 { 00:15:43.208 "name": "BaseBdev3", 00:15:43.208 "aliases": [ 00:15:43.208 "532e1aac-4743-4c15-8fb0-852b4bf1b7ae" 00:15:43.208 ], 00:15:43.208 "product_name": "Malloc disk", 00:15:43.208 "block_size": 512, 00:15:43.208 "num_blocks": 65536, 00:15:43.208 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:43.208 "assigned_rate_limits": { 00:15:43.208 "rw_ios_per_sec": 0, 00:15:43.208 "rw_mbytes_per_sec": 0, 00:15:43.208 "r_mbytes_per_sec": 0, 00:15:43.208 "w_mbytes_per_sec": 0 00:15:43.208 }, 00:15:43.208 "claimed": false, 00:15:43.208 "zoned": false, 00:15:43.208 "supported_io_types": { 00:15:43.208 "read": true, 00:15:43.208 "write": true, 00:15:43.208 "unmap": true, 00:15:43.208 "flush": true, 00:15:43.208 "reset": true, 00:15:43.208 "nvme_admin": false, 00:15:43.208 "nvme_io": false, 00:15:43.208 "nvme_io_md": false, 00:15:43.208 "write_zeroes": true, 00:15:43.208 "zcopy": true, 00:15:43.208 "get_zone_info": false, 00:15:43.209 "zone_management": false, 00:15:43.209 "zone_append": false, 00:15:43.209 "compare": false, 00:15:43.209 "compare_and_write": false, 00:15:43.209 "abort": true, 00:15:43.209 "seek_hole": false, 00:15:43.209 "seek_data": false, 00:15:43.209 "copy": true, 00:15:43.209 "nvme_iov_md": false 00:15:43.209 }, 00:15:43.209 "memory_domains": [ 00:15:43.209 { 00:15:43.209 
"dma_device_id": "system", 00:15:43.209 "dma_device_type": 1 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.209 "dma_device_type": 2 00:15:43.209 } 00:15:43.209 ], 00:15:43.209 "driver_specific": {} 00:15:43.209 } 00:15:43.209 ] 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.209 BaseBdev4 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.209 [ 00:15:43.209 { 00:15:43.209 "name": "BaseBdev4", 00:15:43.209 "aliases": [ 00:15:43.209 "1c1c2aca-8531-4b36-a6db-ce63d6547f0f" 00:15:43.209 ], 00:15:43.209 "product_name": "Malloc disk", 00:15:43.209 "block_size": 512, 00:15:43.209 "num_blocks": 65536, 00:15:43.209 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:43.209 "assigned_rate_limits": { 00:15:43.209 "rw_ios_per_sec": 0, 00:15:43.209 "rw_mbytes_per_sec": 0, 00:15:43.209 "r_mbytes_per_sec": 0, 00:15:43.209 "w_mbytes_per_sec": 0 00:15:43.209 }, 00:15:43.209 "claimed": false, 00:15:43.209 "zoned": false, 00:15:43.209 "supported_io_types": { 00:15:43.209 "read": true, 00:15:43.209 "write": true, 00:15:43.209 "unmap": true, 00:15:43.209 "flush": true, 00:15:43.209 "reset": true, 00:15:43.209 "nvme_admin": false, 00:15:43.209 "nvme_io": false, 00:15:43.209 "nvme_io_md": false, 00:15:43.209 "write_zeroes": true, 00:15:43.209 "zcopy": true, 00:15:43.209 "get_zone_info": false, 00:15:43.209 "zone_management": false, 00:15:43.209 "zone_append": false, 00:15:43.209 "compare": false, 00:15:43.209 "compare_and_write": false, 00:15:43.209 "abort": true, 00:15:43.209 "seek_hole": false, 00:15:43.209 "seek_data": false, 00:15:43.209 "copy": true, 00:15:43.209 "nvme_iov_md": false 00:15:43.209 }, 00:15:43.209 "memory_domains": [ 
00:15:43.209 { 00:15:43.209 "dma_device_id": "system", 00:15:43.209 "dma_device_type": 1 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.209 "dma_device_type": 2 00:15:43.209 } 00:15:43.209 ], 00:15:43.209 "driver_specific": {} 00:15:43.209 } 00:15:43.209 ] 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.209 [2024-12-06 23:49:54.634809] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.209 [2024-12-06 23:49:54.634855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.209 [2024-12-06 23:49:54.634876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.209 [2024-12-06 23:49:54.636649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.209 [2024-12-06 23:49:54.636727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.209 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.210 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.210 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.210 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.210 "name": "Existed_Raid", 00:15:43.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.210 "strip_size_kb": 64, 00:15:43.210 "state": "configuring", 00:15:43.210 "raid_level": "raid5f", 00:15:43.210 
"superblock": false, 00:15:43.210 "num_base_bdevs": 4, 00:15:43.210 "num_base_bdevs_discovered": 3, 00:15:43.210 "num_base_bdevs_operational": 4, 00:15:43.210 "base_bdevs_list": [ 00:15:43.210 { 00:15:43.210 "name": "BaseBdev1", 00:15:43.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.210 "is_configured": false, 00:15:43.210 "data_offset": 0, 00:15:43.210 "data_size": 0 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "name": "BaseBdev2", 00:15:43.210 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:43.210 "is_configured": true, 00:15:43.210 "data_offset": 0, 00:15:43.210 "data_size": 65536 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "name": "BaseBdev3", 00:15:43.210 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:43.210 "is_configured": true, 00:15:43.210 "data_offset": 0, 00:15:43.210 "data_size": 65536 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "name": "BaseBdev4", 00:15:43.210 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:43.210 "is_configured": true, 00:15:43.210 "data_offset": 0, 00:15:43.210 "data_size": 65536 00:15:43.210 } 00:15:43.210 ] 00:15:43.210 }' 00:15:43.210 23:49:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.210 23:49:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.781 [2024-12-06 23:49:55.129922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.781 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.781 "name": "Existed_Raid", 00:15:43.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.781 "strip_size_kb": 64, 00:15:43.781 "state": "configuring", 00:15:43.781 "raid_level": "raid5f", 00:15:43.781 "superblock": false, 
00:15:43.781 "num_base_bdevs": 4, 00:15:43.781 "num_base_bdevs_discovered": 2, 00:15:43.782 "num_base_bdevs_operational": 4, 00:15:43.782 "base_bdevs_list": [ 00:15:43.782 { 00:15:43.782 "name": "BaseBdev1", 00:15:43.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.782 "is_configured": false, 00:15:43.782 "data_offset": 0, 00:15:43.782 "data_size": 0 00:15:43.782 }, 00:15:43.782 { 00:15:43.782 "name": null, 00:15:43.782 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:43.782 "is_configured": false, 00:15:43.782 "data_offset": 0, 00:15:43.782 "data_size": 65536 00:15:43.782 }, 00:15:43.782 { 00:15:43.782 "name": "BaseBdev3", 00:15:43.782 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:43.782 "is_configured": true, 00:15:43.782 "data_offset": 0, 00:15:43.782 "data_size": 65536 00:15:43.782 }, 00:15:43.782 { 00:15:43.782 "name": "BaseBdev4", 00:15:43.782 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:43.782 "is_configured": true, 00:15:43.782 "data_offset": 0, 00:15:43.782 "data_size": 65536 00:15:43.782 } 00:15:43.782 ] 00:15:43.782 }' 00:15:43.782 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.782 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:44.351 
23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.351 [2024-12-06 23:49:55.689166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.351 BaseBdev1 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.351 
23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.351 [ 00:15:44.351 { 00:15:44.351 "name": "BaseBdev1", 00:15:44.351 "aliases": [ 00:15:44.351 "2e54a28e-efa4-4346-9996-5efdeb5fac1c" 00:15:44.351 ], 00:15:44.351 "product_name": "Malloc disk", 00:15:44.351 "block_size": 512, 00:15:44.351 "num_blocks": 65536, 00:15:44.351 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:44.351 "assigned_rate_limits": { 00:15:44.351 "rw_ios_per_sec": 0, 00:15:44.351 "rw_mbytes_per_sec": 0, 00:15:44.351 "r_mbytes_per_sec": 0, 00:15:44.351 "w_mbytes_per_sec": 0 00:15:44.351 }, 00:15:44.351 "claimed": true, 00:15:44.351 "claim_type": "exclusive_write", 00:15:44.351 "zoned": false, 00:15:44.351 "supported_io_types": { 00:15:44.351 "read": true, 00:15:44.351 "write": true, 00:15:44.351 "unmap": true, 00:15:44.351 "flush": true, 00:15:44.351 "reset": true, 00:15:44.351 "nvme_admin": false, 00:15:44.351 "nvme_io": false, 00:15:44.351 "nvme_io_md": false, 00:15:44.351 "write_zeroes": true, 00:15:44.351 "zcopy": true, 00:15:44.351 "get_zone_info": false, 00:15:44.351 "zone_management": false, 00:15:44.351 "zone_append": false, 00:15:44.351 "compare": false, 00:15:44.351 "compare_and_write": false, 00:15:44.351 "abort": true, 00:15:44.351 "seek_hole": false, 00:15:44.351 "seek_data": false, 00:15:44.351 "copy": true, 00:15:44.351 "nvme_iov_md": false 00:15:44.351 }, 00:15:44.351 "memory_domains": [ 00:15:44.351 { 00:15:44.351 "dma_device_id": "system", 00:15:44.351 "dma_device_type": 1 00:15:44.351 }, 00:15:44.351 { 00:15:44.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.351 "dma_device_type": 2 00:15:44.351 } 00:15:44.351 ], 00:15:44.351 "driver_specific": {} 00:15:44.351 } 00:15:44.351 ] 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:44.351 23:49:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.351 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.351 "name": "Existed_Raid", 00:15:44.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.351 "strip_size_kb": 64, 00:15:44.351 "state": 
"configuring", 00:15:44.351 "raid_level": "raid5f", 00:15:44.351 "superblock": false, 00:15:44.351 "num_base_bdevs": 4, 00:15:44.351 "num_base_bdevs_discovered": 3, 00:15:44.351 "num_base_bdevs_operational": 4, 00:15:44.351 "base_bdevs_list": [ 00:15:44.351 { 00:15:44.351 "name": "BaseBdev1", 00:15:44.351 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:44.351 "is_configured": true, 00:15:44.351 "data_offset": 0, 00:15:44.351 "data_size": 65536 00:15:44.351 }, 00:15:44.351 { 00:15:44.351 "name": null, 00:15:44.351 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:44.351 "is_configured": false, 00:15:44.351 "data_offset": 0, 00:15:44.351 "data_size": 65536 00:15:44.351 }, 00:15:44.351 { 00:15:44.351 "name": "BaseBdev3", 00:15:44.351 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:44.351 "is_configured": true, 00:15:44.351 "data_offset": 0, 00:15:44.351 "data_size": 65536 00:15:44.351 }, 00:15:44.351 { 00:15:44.351 "name": "BaseBdev4", 00:15:44.351 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:44.351 "is_configured": true, 00:15:44.351 "data_offset": 0, 00:15:44.351 "data_size": 65536 00:15:44.351 } 00:15:44.352 ] 00:15:44.352 }' 00:15:44.352 23:49:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.352 23:49:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.611 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.611 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.611 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.611 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.870 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.870 23:49:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.871 [2024-12-06 23:49:56.212393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.871 23:49:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.871 "name": "Existed_Raid", 00:15:44.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.871 "strip_size_kb": 64, 00:15:44.871 "state": "configuring", 00:15:44.871 "raid_level": "raid5f", 00:15:44.871 "superblock": false, 00:15:44.871 "num_base_bdevs": 4, 00:15:44.871 "num_base_bdevs_discovered": 2, 00:15:44.871 "num_base_bdevs_operational": 4, 00:15:44.871 "base_bdevs_list": [ 00:15:44.871 { 00:15:44.871 "name": "BaseBdev1", 00:15:44.871 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:44.871 "is_configured": true, 00:15:44.871 "data_offset": 0, 00:15:44.871 "data_size": 65536 00:15:44.871 }, 00:15:44.871 { 00:15:44.871 "name": null, 00:15:44.871 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:44.871 "is_configured": false, 00:15:44.871 "data_offset": 0, 00:15:44.871 "data_size": 65536 00:15:44.871 }, 00:15:44.871 { 00:15:44.871 "name": null, 00:15:44.871 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:44.871 "is_configured": false, 00:15:44.871 "data_offset": 0, 00:15:44.871 "data_size": 65536 00:15:44.871 }, 00:15:44.871 { 00:15:44.871 "name": "BaseBdev4", 00:15:44.871 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:44.871 "is_configured": true, 00:15:44.871 "data_offset": 0, 00:15:44.871 "data_size": 65536 00:15:44.871 } 00:15:44.871 ] 00:15:44.871 }' 00:15:44.871 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.871 23:49:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.131 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.131 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.131 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.131 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.392 [2024-12-06 23:49:56.731672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.392 
23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.392 "name": "Existed_Raid", 00:15:45.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.392 "strip_size_kb": 64, 00:15:45.392 "state": "configuring", 00:15:45.392 "raid_level": "raid5f", 00:15:45.392 "superblock": false, 00:15:45.392 "num_base_bdevs": 4, 00:15:45.392 "num_base_bdevs_discovered": 3, 00:15:45.392 "num_base_bdevs_operational": 4, 00:15:45.392 "base_bdevs_list": [ 00:15:45.392 { 00:15:45.392 "name": "BaseBdev1", 00:15:45.392 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:45.392 "is_configured": true, 00:15:45.392 "data_offset": 0, 00:15:45.392 "data_size": 65536 00:15:45.392 }, 00:15:45.392 { 00:15:45.392 "name": null, 00:15:45.392 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:45.392 "is_configured": 
false, 00:15:45.392 "data_offset": 0, 00:15:45.392 "data_size": 65536 00:15:45.392 }, 00:15:45.392 { 00:15:45.392 "name": "BaseBdev3", 00:15:45.392 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:45.392 "is_configured": true, 00:15:45.392 "data_offset": 0, 00:15:45.392 "data_size": 65536 00:15:45.392 }, 00:15:45.392 { 00:15:45.392 "name": "BaseBdev4", 00:15:45.392 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:45.392 "is_configured": true, 00:15:45.392 "data_offset": 0, 00:15:45.392 "data_size": 65536 00:15:45.392 } 00:15:45.392 ] 00:15:45.392 }' 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.392 23:49:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.653 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.653 [2024-12-06 23:49:57.206872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.914 "name": "Existed_Raid", 00:15:45.914 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:45.914 "strip_size_kb": 64, 00:15:45.914 "state": "configuring", 00:15:45.914 "raid_level": "raid5f", 00:15:45.914 "superblock": false, 00:15:45.914 "num_base_bdevs": 4, 00:15:45.914 "num_base_bdevs_discovered": 2, 00:15:45.914 "num_base_bdevs_operational": 4, 00:15:45.914 "base_bdevs_list": [ 00:15:45.914 { 00:15:45.914 "name": null, 00:15:45.914 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:45.914 "is_configured": false, 00:15:45.914 "data_offset": 0, 00:15:45.914 "data_size": 65536 00:15:45.914 }, 00:15:45.914 { 00:15:45.914 "name": null, 00:15:45.914 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:45.914 "is_configured": false, 00:15:45.914 "data_offset": 0, 00:15:45.914 "data_size": 65536 00:15:45.914 }, 00:15:45.914 { 00:15:45.914 "name": "BaseBdev3", 00:15:45.914 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:45.914 "is_configured": true, 00:15:45.914 "data_offset": 0, 00:15:45.914 "data_size": 65536 00:15:45.914 }, 00:15:45.914 { 00:15:45.914 "name": "BaseBdev4", 00:15:45.914 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:45.914 "is_configured": true, 00:15:45.914 "data_offset": 0, 00:15:45.914 "data_size": 65536 00:15:45.914 } 00:15:45.914 ] 00:15:45.914 }' 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.914 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.484 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:46.484 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.484 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.484 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.485 [2024-12-06 23:49:57.839233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.485 "name": "Existed_Raid", 00:15:46.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.485 "strip_size_kb": 64, 00:15:46.485 "state": "configuring", 00:15:46.485 "raid_level": "raid5f", 00:15:46.485 "superblock": false, 00:15:46.485 "num_base_bdevs": 4, 00:15:46.485 "num_base_bdevs_discovered": 3, 00:15:46.485 "num_base_bdevs_operational": 4, 00:15:46.485 "base_bdevs_list": [ 00:15:46.485 { 00:15:46.485 "name": null, 00:15:46.485 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:46.485 "is_configured": false, 00:15:46.485 "data_offset": 0, 00:15:46.485 "data_size": 65536 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "name": "BaseBdev2", 00:15:46.485 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:46.485 "is_configured": true, 00:15:46.485 "data_offset": 0, 00:15:46.485 "data_size": 65536 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "name": "BaseBdev3", 00:15:46.485 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:46.485 "is_configured": true, 00:15:46.485 "data_offset": 0, 00:15:46.485 "data_size": 65536 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "name": "BaseBdev4", 00:15:46.485 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:46.485 "is_configured": true, 00:15:46.485 "data_offset": 0, 00:15:46.485 "data_size": 65536 00:15:46.485 } 00:15:46.485 ] 00:15:46.485 }' 00:15:46.485 23:49:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.485 23:49:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.744 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:46.744 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.744 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.744 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2e54a28e-efa4-4346-9996-5efdeb5fac1c 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.005 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.005 [2024-12-06 23:49:58.400908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:47.006 [2024-12-06 
23:49:58.400958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:47.006 [2024-12-06 23:49:58.400966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:47.006 [2024-12-06 23:49:58.401208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:47.006 [2024-12-06 23:49:58.407986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:47.006 [2024-12-06 23:49:58.408054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:47.006 [2024-12-06 23:49:58.408326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.006 NewBaseBdev 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.006 [ 00:15:47.006 { 00:15:47.006 "name": "NewBaseBdev", 00:15:47.006 "aliases": [ 00:15:47.006 "2e54a28e-efa4-4346-9996-5efdeb5fac1c" 00:15:47.006 ], 00:15:47.006 "product_name": "Malloc disk", 00:15:47.006 "block_size": 512, 00:15:47.006 "num_blocks": 65536, 00:15:47.006 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:47.006 "assigned_rate_limits": { 00:15:47.006 "rw_ios_per_sec": 0, 00:15:47.006 "rw_mbytes_per_sec": 0, 00:15:47.006 "r_mbytes_per_sec": 0, 00:15:47.006 "w_mbytes_per_sec": 0 00:15:47.006 }, 00:15:47.006 "claimed": true, 00:15:47.006 "claim_type": "exclusive_write", 00:15:47.006 "zoned": false, 00:15:47.006 "supported_io_types": { 00:15:47.006 "read": true, 00:15:47.006 "write": true, 00:15:47.006 "unmap": true, 00:15:47.006 "flush": true, 00:15:47.006 "reset": true, 00:15:47.006 "nvme_admin": false, 00:15:47.006 "nvme_io": false, 00:15:47.006 "nvme_io_md": false, 00:15:47.006 "write_zeroes": true, 00:15:47.006 "zcopy": true, 00:15:47.006 "get_zone_info": false, 00:15:47.006 "zone_management": false, 00:15:47.006 "zone_append": false, 00:15:47.006 "compare": false, 00:15:47.006 "compare_and_write": false, 00:15:47.006 "abort": true, 00:15:47.006 "seek_hole": false, 00:15:47.006 "seek_data": false, 00:15:47.006 "copy": true, 00:15:47.006 "nvme_iov_md": false 00:15:47.006 }, 00:15:47.006 "memory_domains": [ 00:15:47.006 { 00:15:47.006 "dma_device_id": "system", 00:15:47.006 "dma_device_type": 1 00:15:47.006 }, 00:15:47.006 { 00:15:47.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.006 "dma_device_type": 2 00:15:47.006 } 
00:15:47.006 ], 00:15:47.006 "driver_specific": {} 00:15:47.006 } 00:15:47.006 ] 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.006 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.006 "name": "Existed_Raid", 00:15:47.006 "uuid": "ef51162b-8c96-404d-b5a6-c19ae0ec7bca", 00:15:47.006 "strip_size_kb": 64, 00:15:47.006 "state": "online", 00:15:47.006 "raid_level": "raid5f", 00:15:47.006 "superblock": false, 00:15:47.006 "num_base_bdevs": 4, 00:15:47.006 "num_base_bdevs_discovered": 4, 00:15:47.006 "num_base_bdevs_operational": 4, 00:15:47.006 "base_bdevs_list": [ 00:15:47.006 { 00:15:47.006 "name": "NewBaseBdev", 00:15:47.006 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:47.006 "is_configured": true, 00:15:47.006 "data_offset": 0, 00:15:47.006 "data_size": 65536 00:15:47.006 }, 00:15:47.006 { 00:15:47.006 "name": "BaseBdev2", 00:15:47.006 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:47.006 "is_configured": true, 00:15:47.006 "data_offset": 0, 00:15:47.006 "data_size": 65536 00:15:47.006 }, 00:15:47.006 { 00:15:47.006 "name": "BaseBdev3", 00:15:47.006 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:47.006 "is_configured": true, 00:15:47.006 "data_offset": 0, 00:15:47.006 "data_size": 65536 00:15:47.006 }, 00:15:47.006 { 00:15:47.007 "name": "BaseBdev4", 00:15:47.007 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:47.007 "is_configured": true, 00:15:47.007 "data_offset": 0, 00:15:47.007 "data_size": 65536 00:15:47.007 } 00:15:47.007 ] 00:15:47.007 }' 00:15:47.007 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.007 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.578 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.578 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:47.578 23:49:58 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.578 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.578 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.578 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.578 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:47.578 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.579 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.579 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.579 [2024-12-06 23:49:58.927708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.579 23:49:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.579 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.579 "name": "Existed_Raid", 00:15:47.579 "aliases": [ 00:15:47.579 "ef51162b-8c96-404d-b5a6-c19ae0ec7bca" 00:15:47.579 ], 00:15:47.579 "product_name": "Raid Volume", 00:15:47.579 "block_size": 512, 00:15:47.579 "num_blocks": 196608, 00:15:47.579 "uuid": "ef51162b-8c96-404d-b5a6-c19ae0ec7bca", 00:15:47.579 "assigned_rate_limits": { 00:15:47.579 "rw_ios_per_sec": 0, 00:15:47.579 "rw_mbytes_per_sec": 0, 00:15:47.579 "r_mbytes_per_sec": 0, 00:15:47.579 "w_mbytes_per_sec": 0 00:15:47.579 }, 00:15:47.579 "claimed": false, 00:15:47.579 "zoned": false, 00:15:47.579 "supported_io_types": { 00:15:47.579 "read": true, 00:15:47.579 "write": true, 00:15:47.579 "unmap": false, 00:15:47.579 "flush": false, 00:15:47.579 "reset": true, 00:15:47.579 "nvme_admin": false, 00:15:47.579 "nvme_io": false, 00:15:47.579 "nvme_io_md": 
false, 00:15:47.579 "write_zeroes": true, 00:15:47.579 "zcopy": false, 00:15:47.579 "get_zone_info": false, 00:15:47.579 "zone_management": false, 00:15:47.579 "zone_append": false, 00:15:47.579 "compare": false, 00:15:47.579 "compare_and_write": false, 00:15:47.579 "abort": false, 00:15:47.579 "seek_hole": false, 00:15:47.579 "seek_data": false, 00:15:47.579 "copy": false, 00:15:47.579 "nvme_iov_md": false 00:15:47.579 }, 00:15:47.579 "driver_specific": { 00:15:47.579 "raid": { 00:15:47.579 "uuid": "ef51162b-8c96-404d-b5a6-c19ae0ec7bca", 00:15:47.579 "strip_size_kb": 64, 00:15:47.579 "state": "online", 00:15:47.579 "raid_level": "raid5f", 00:15:47.579 "superblock": false, 00:15:47.579 "num_base_bdevs": 4, 00:15:47.579 "num_base_bdevs_discovered": 4, 00:15:47.579 "num_base_bdevs_operational": 4, 00:15:47.579 "base_bdevs_list": [ 00:15:47.579 { 00:15:47.579 "name": "NewBaseBdev", 00:15:47.579 "uuid": "2e54a28e-efa4-4346-9996-5efdeb5fac1c", 00:15:47.579 "is_configured": true, 00:15:47.579 "data_offset": 0, 00:15:47.579 "data_size": 65536 00:15:47.579 }, 00:15:47.579 { 00:15:47.579 "name": "BaseBdev2", 00:15:47.579 "uuid": "b8a698e1-91f5-4925-83ae-f1cf04251367", 00:15:47.579 "is_configured": true, 00:15:47.579 "data_offset": 0, 00:15:47.579 "data_size": 65536 00:15:47.579 }, 00:15:47.579 { 00:15:47.579 "name": "BaseBdev3", 00:15:47.579 "uuid": "532e1aac-4743-4c15-8fb0-852b4bf1b7ae", 00:15:47.579 "is_configured": true, 00:15:47.579 "data_offset": 0, 00:15:47.579 "data_size": 65536 00:15:47.579 }, 00:15:47.579 { 00:15:47.579 "name": "BaseBdev4", 00:15:47.579 "uuid": "1c1c2aca-8531-4b36-a6db-ce63d6547f0f", 00:15:47.579 "is_configured": true, 00:15:47.579 "data_offset": 0, 00:15:47.579 "data_size": 65536 00:15:47.579 } 00:15:47.579 ] 00:15:47.579 } 00:15:47.579 } 00:15:47.579 }' 00:15:47.579 23:49:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.579 23:49:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:47.579 BaseBdev2 00:15:47.579 BaseBdev3 00:15:47.579 BaseBdev4' 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.579 23:49:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.840 23:49:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.840 [2024-12-06 23:49:59.254931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.840 [2024-12-06 23:49:59.254954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.840 [2024-12-06 23:49:59.255013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.840 [2024-12-06 23:49:59.255287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.840 [2024-12-06 23:49:59.255297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82675 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82675 ']' 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82675 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82675 00:15:47.840 killing process with pid 82675 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.840 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.841 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82675' 00:15:47.841 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82675 00:15:47.841 [2024-12-06 23:49:59.305447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.841 23:49:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82675 00:15:48.411 [2024-12-06 23:49:59.670988] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.349 ************************************ 00:15:49.349 END TEST raid5f_state_function_test 00:15:49.349 ************************************ 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:49.349 00:15:49.349 real 0m11.557s 00:15:49.349 user 0m18.406s 00:15:49.349 sys 0m2.280s 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.349 23:50:00 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:49.349 23:50:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:49.349 23:50:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.349 23:50:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.349 ************************************ 00:15:49.349 START TEST 
raid5f_state_function_test_sb 00:15:49.349 ************************************ 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:49.349 
23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:49.349 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83344 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83344' 00:15:49.350 Process raid pid: 83344 00:15:49.350 23:50:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83344 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83344 ']' 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.350 23:50:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.609 [2024-12-06 23:50:00.929205] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:15:49.610 [2024-12-06 23:50:00.929402] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.610 [2024-12-06 23:50:01.110486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.869 [2024-12-06 23:50:01.214521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.869 [2024-12-06 23:50:01.404779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.869 [2024-12-06 23:50:01.404862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.438 [2024-12-06 23:50:01.735106] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.438 [2024-12-06 23:50:01.735163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.438 [2024-12-06 23:50:01.735173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.438 [2024-12-06 23:50:01.735198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.438 [2024-12-06 23:50:01.735203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:50.438 [2024-12-06 23:50:01.735211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.438 [2024-12-06 23:50:01.735217] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:50.438 [2024-12-06 23:50:01.735225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.438 "name": "Existed_Raid", 00:15:50.438 "uuid": "9b41718e-0c17-4883-bae5-92d3c24a4e94", 00:15:50.438 "strip_size_kb": 64, 00:15:50.438 "state": "configuring", 00:15:50.438 "raid_level": "raid5f", 00:15:50.438 "superblock": true, 00:15:50.438 "num_base_bdevs": 4, 00:15:50.438 "num_base_bdevs_discovered": 0, 00:15:50.438 "num_base_bdevs_operational": 4, 00:15:50.438 "base_bdevs_list": [ 00:15:50.438 { 00:15:50.438 "name": "BaseBdev1", 00:15:50.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.438 "is_configured": false, 00:15:50.438 "data_offset": 0, 00:15:50.438 "data_size": 0 00:15:50.438 }, 00:15:50.438 { 00:15:50.438 "name": "BaseBdev2", 00:15:50.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.438 "is_configured": false, 00:15:50.438 "data_offset": 0, 00:15:50.438 "data_size": 0 00:15:50.438 }, 00:15:50.438 { 00:15:50.438 "name": "BaseBdev3", 00:15:50.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.438 "is_configured": false, 00:15:50.438 "data_offset": 0, 00:15:50.438 "data_size": 0 00:15:50.438 }, 00:15:50.438 { 00:15:50.438 "name": "BaseBdev4", 00:15:50.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.438 "is_configured": false, 00:15:50.438 "data_offset": 0, 00:15:50.438 "data_size": 0 00:15:50.438 } 00:15:50.438 ] 00:15:50.438 }' 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.438 23:50:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.698 [2024-12-06 23:50:02.210198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.698 [2024-12-06 23:50:02.210289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.698 [2024-12-06 23:50:02.222187] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.698 [2024-12-06 23:50:02.222279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.698 [2024-12-06 23:50:02.222305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.698 [2024-12-06 23:50:02.222327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.698 [2024-12-06 23:50:02.222344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.698 [2024-12-06 23:50:02.222363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.698 [2024-12-06 23:50:02.222380] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:50.698 [2024-12-06 23:50:02.222400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.698 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.958 [2024-12-06 23:50:02.270295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.958 BaseBdev1 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.958 [ 00:15:50.958 { 00:15:50.958 "name": "BaseBdev1", 00:15:50.958 "aliases": [ 00:15:50.958 "032a1d2b-343f-4e10-b47d-b8df5d1df789" 00:15:50.958 ], 00:15:50.958 "product_name": "Malloc disk", 00:15:50.958 "block_size": 512, 00:15:50.958 "num_blocks": 65536, 00:15:50.958 "uuid": "032a1d2b-343f-4e10-b47d-b8df5d1df789", 00:15:50.958 "assigned_rate_limits": { 00:15:50.958 "rw_ios_per_sec": 0, 00:15:50.958 "rw_mbytes_per_sec": 0, 00:15:50.958 "r_mbytes_per_sec": 0, 00:15:50.958 "w_mbytes_per_sec": 0 00:15:50.958 }, 00:15:50.958 "claimed": true, 00:15:50.958 "claim_type": "exclusive_write", 00:15:50.958 "zoned": false, 00:15:50.958 "supported_io_types": { 00:15:50.958 "read": true, 00:15:50.958 "write": true, 00:15:50.958 "unmap": true, 00:15:50.958 "flush": true, 00:15:50.958 "reset": true, 00:15:50.958 "nvme_admin": false, 00:15:50.958 "nvme_io": false, 00:15:50.958 "nvme_io_md": false, 00:15:50.958 "write_zeroes": true, 00:15:50.958 "zcopy": true, 00:15:50.958 "get_zone_info": false, 00:15:50.958 "zone_management": false, 00:15:50.958 "zone_append": false, 00:15:50.958 "compare": false, 00:15:50.958 "compare_and_write": false, 00:15:50.958 "abort": true, 00:15:50.958 "seek_hole": false, 00:15:50.958 "seek_data": false, 00:15:50.958 "copy": true, 00:15:50.958 "nvme_iov_md": false 00:15:50.958 }, 00:15:50.958 "memory_domains": [ 00:15:50.958 { 00:15:50.958 "dma_device_id": "system", 00:15:50.958 "dma_device_type": 1 00:15:50.958 }, 00:15:50.958 { 00:15:50.958 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:50.958 "dma_device_type": 2 00:15:50.958 } 00:15:50.958 ], 00:15:50.958 "driver_specific": {} 00:15:50.958 } 00:15:50.958 ] 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.958 23:50:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.958 "name": "Existed_Raid", 00:15:50.958 "uuid": "85c06f67-258c-41f9-93d8-66aaa7720cde", 00:15:50.958 "strip_size_kb": 64, 00:15:50.958 "state": "configuring", 00:15:50.958 "raid_level": "raid5f", 00:15:50.958 "superblock": true, 00:15:50.958 "num_base_bdevs": 4, 00:15:50.958 "num_base_bdevs_discovered": 1, 00:15:50.958 "num_base_bdevs_operational": 4, 00:15:50.958 "base_bdevs_list": [ 00:15:50.958 { 00:15:50.958 "name": "BaseBdev1", 00:15:50.958 "uuid": "032a1d2b-343f-4e10-b47d-b8df5d1df789", 00:15:50.958 "is_configured": true, 00:15:50.958 "data_offset": 2048, 00:15:50.958 "data_size": 63488 00:15:50.958 }, 00:15:50.958 { 00:15:50.958 "name": "BaseBdev2", 00:15:50.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.958 "is_configured": false, 00:15:50.958 "data_offset": 0, 00:15:50.958 "data_size": 0 00:15:50.958 }, 00:15:50.958 { 00:15:50.958 "name": "BaseBdev3", 00:15:50.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.958 "is_configured": false, 00:15:50.958 "data_offset": 0, 00:15:50.958 "data_size": 0 00:15:50.958 }, 00:15:50.958 { 00:15:50.958 "name": "BaseBdev4", 00:15:50.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.958 "is_configured": false, 00:15:50.958 "data_offset": 0, 00:15:50.958 "data_size": 0 00:15:50.958 } 00:15:50.958 ] 00:15:50.958 }' 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.958 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.528 23:50:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.528 [2024-12-06 23:50:02.789422] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.528 [2024-12-06 23:50:02.789503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.528 [2024-12-06 23:50:02.801459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.528 [2024-12-06 23:50:02.803233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.528 [2024-12-06 23:50:02.803276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.528 [2024-12-06 23:50:02.803285] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.528 [2024-12-06 23:50:02.803295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.528 [2024-12-06 23:50:02.803301] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:51.528 [2024-12-06 23:50:02.803309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.528 23:50:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.528 "name": "Existed_Raid", 00:15:51.528 "uuid": "617426ed-1afc-4b1d-948e-b74703aa1872", 00:15:51.528 "strip_size_kb": 64, 00:15:51.528 "state": "configuring", 00:15:51.528 "raid_level": "raid5f", 00:15:51.528 "superblock": true, 00:15:51.528 "num_base_bdevs": 4, 00:15:51.528 "num_base_bdevs_discovered": 1, 00:15:51.528 "num_base_bdevs_operational": 4, 00:15:51.528 "base_bdevs_list": [ 00:15:51.528 { 00:15:51.528 "name": "BaseBdev1", 00:15:51.528 "uuid": "032a1d2b-343f-4e10-b47d-b8df5d1df789", 00:15:51.528 "is_configured": true, 00:15:51.528 "data_offset": 2048, 00:15:51.528 "data_size": 63488 00:15:51.528 }, 00:15:51.528 { 00:15:51.528 "name": "BaseBdev2", 00:15:51.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.528 "is_configured": false, 00:15:51.528 "data_offset": 0, 00:15:51.528 "data_size": 0 00:15:51.528 }, 00:15:51.528 { 00:15:51.528 "name": "BaseBdev3", 00:15:51.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.528 "is_configured": false, 00:15:51.528 "data_offset": 0, 00:15:51.528 "data_size": 0 00:15:51.528 }, 00:15:51.528 { 00:15:51.528 "name": "BaseBdev4", 00:15:51.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.528 "is_configured": false, 00:15:51.528 "data_offset": 0, 00:15:51.528 "data_size": 0 00:15:51.528 } 00:15:51.528 ] 00:15:51.528 }' 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.528 23:50:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.789 [2024-12-06 23:50:03.301562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.789 BaseBdev2 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.789 [ 00:15:51.789 { 00:15:51.789 "name": "BaseBdev2", 00:15:51.789 "aliases": [ 00:15:51.789 
"b23f4d29-403c-49aa-9ced-4edfe9c3a339" 00:15:51.789 ], 00:15:51.789 "product_name": "Malloc disk", 00:15:51.789 "block_size": 512, 00:15:51.789 "num_blocks": 65536, 00:15:51.789 "uuid": "b23f4d29-403c-49aa-9ced-4edfe9c3a339", 00:15:51.789 "assigned_rate_limits": { 00:15:51.789 "rw_ios_per_sec": 0, 00:15:51.789 "rw_mbytes_per_sec": 0, 00:15:51.789 "r_mbytes_per_sec": 0, 00:15:51.789 "w_mbytes_per_sec": 0 00:15:51.789 }, 00:15:51.789 "claimed": true, 00:15:51.789 "claim_type": "exclusive_write", 00:15:51.789 "zoned": false, 00:15:51.789 "supported_io_types": { 00:15:51.789 "read": true, 00:15:51.789 "write": true, 00:15:51.789 "unmap": true, 00:15:51.789 "flush": true, 00:15:51.789 "reset": true, 00:15:51.789 "nvme_admin": false, 00:15:51.789 "nvme_io": false, 00:15:51.789 "nvme_io_md": false, 00:15:51.789 "write_zeroes": true, 00:15:51.789 "zcopy": true, 00:15:51.789 "get_zone_info": false, 00:15:51.789 "zone_management": false, 00:15:51.789 "zone_append": false, 00:15:51.789 "compare": false, 00:15:51.789 "compare_and_write": false, 00:15:51.789 "abort": true, 00:15:51.789 "seek_hole": false, 00:15:51.789 "seek_data": false, 00:15:51.789 "copy": true, 00:15:51.789 "nvme_iov_md": false 00:15:51.789 }, 00:15:51.789 "memory_domains": [ 00:15:51.789 { 00:15:51.789 "dma_device_id": "system", 00:15:51.789 "dma_device_type": 1 00:15:51.789 }, 00:15:51.789 { 00:15:51.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.789 "dma_device_type": 2 00:15:51.789 } 00:15:51.789 ], 00:15:51.789 "driver_specific": {} 00:15:51.789 } 00:15:51.789 ] 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.789 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.049 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.049 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.049 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.049 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.049 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.049 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.049 "name": "Existed_Raid", 00:15:52.049 "uuid": 
"617426ed-1afc-4b1d-948e-b74703aa1872", 00:15:52.049 "strip_size_kb": 64, 00:15:52.049 "state": "configuring", 00:15:52.049 "raid_level": "raid5f", 00:15:52.049 "superblock": true, 00:15:52.049 "num_base_bdevs": 4, 00:15:52.049 "num_base_bdevs_discovered": 2, 00:15:52.049 "num_base_bdevs_operational": 4, 00:15:52.049 "base_bdevs_list": [ 00:15:52.049 { 00:15:52.049 "name": "BaseBdev1", 00:15:52.049 "uuid": "032a1d2b-343f-4e10-b47d-b8df5d1df789", 00:15:52.049 "is_configured": true, 00:15:52.049 "data_offset": 2048, 00:15:52.049 "data_size": 63488 00:15:52.049 }, 00:15:52.049 { 00:15:52.049 "name": "BaseBdev2", 00:15:52.049 "uuid": "b23f4d29-403c-49aa-9ced-4edfe9c3a339", 00:15:52.049 "is_configured": true, 00:15:52.049 "data_offset": 2048, 00:15:52.049 "data_size": 63488 00:15:52.049 }, 00:15:52.049 { 00:15:52.049 "name": "BaseBdev3", 00:15:52.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.049 "is_configured": false, 00:15:52.049 "data_offset": 0, 00:15:52.049 "data_size": 0 00:15:52.049 }, 00:15:52.049 { 00:15:52.049 "name": "BaseBdev4", 00:15:52.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.049 "is_configured": false, 00:15:52.049 "data_offset": 0, 00:15:52.049 "data_size": 0 00:15:52.049 } 00:15:52.049 ] 00:15:52.049 }' 00:15:52.049 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.049 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.309 [2024-12-06 23:50:03.847288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.309 BaseBdev3 
00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.309 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.569 [ 00:15:52.569 { 00:15:52.569 "name": "BaseBdev3", 00:15:52.569 "aliases": [ 00:15:52.569 "adcf5de4-4774-4056-98fd-123bffd1b39b" 00:15:52.569 ], 00:15:52.569 "product_name": "Malloc disk", 00:15:52.569 "block_size": 512, 00:15:52.569 "num_blocks": 65536, 00:15:52.569 "uuid": "adcf5de4-4774-4056-98fd-123bffd1b39b", 00:15:52.569 
"assigned_rate_limits": { 00:15:52.569 "rw_ios_per_sec": 0, 00:15:52.569 "rw_mbytes_per_sec": 0, 00:15:52.569 "r_mbytes_per_sec": 0, 00:15:52.569 "w_mbytes_per_sec": 0 00:15:52.569 }, 00:15:52.569 "claimed": true, 00:15:52.569 "claim_type": "exclusive_write", 00:15:52.569 "zoned": false, 00:15:52.569 "supported_io_types": { 00:15:52.569 "read": true, 00:15:52.569 "write": true, 00:15:52.569 "unmap": true, 00:15:52.569 "flush": true, 00:15:52.569 "reset": true, 00:15:52.569 "nvme_admin": false, 00:15:52.569 "nvme_io": false, 00:15:52.569 "nvme_io_md": false, 00:15:52.569 "write_zeroes": true, 00:15:52.569 "zcopy": true, 00:15:52.569 "get_zone_info": false, 00:15:52.569 "zone_management": false, 00:15:52.569 "zone_append": false, 00:15:52.569 "compare": false, 00:15:52.569 "compare_and_write": false, 00:15:52.569 "abort": true, 00:15:52.569 "seek_hole": false, 00:15:52.569 "seek_data": false, 00:15:52.569 "copy": true, 00:15:52.569 "nvme_iov_md": false 00:15:52.569 }, 00:15:52.569 "memory_domains": [ 00:15:52.569 { 00:15:52.569 "dma_device_id": "system", 00:15:52.569 "dma_device_type": 1 00:15:52.569 }, 00:15:52.569 { 00:15:52.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.569 "dma_device_type": 2 00:15:52.569 } 00:15:52.569 ], 00:15:52.569 "driver_specific": {} 00:15:52.569 } 00:15:52.569 ] 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.569 "name": "Existed_Raid", 00:15:52.569 "uuid": "617426ed-1afc-4b1d-948e-b74703aa1872", 00:15:52.569 "strip_size_kb": 64, 00:15:52.569 "state": "configuring", 00:15:52.569 "raid_level": "raid5f", 00:15:52.569 "superblock": true, 00:15:52.569 "num_base_bdevs": 4, 00:15:52.569 "num_base_bdevs_discovered": 3, 
00:15:52.569 "num_base_bdevs_operational": 4, 00:15:52.569 "base_bdevs_list": [ 00:15:52.569 { 00:15:52.569 "name": "BaseBdev1", 00:15:52.569 "uuid": "032a1d2b-343f-4e10-b47d-b8df5d1df789", 00:15:52.569 "is_configured": true, 00:15:52.569 "data_offset": 2048, 00:15:52.569 "data_size": 63488 00:15:52.569 }, 00:15:52.569 { 00:15:52.569 "name": "BaseBdev2", 00:15:52.569 "uuid": "b23f4d29-403c-49aa-9ced-4edfe9c3a339", 00:15:52.569 "is_configured": true, 00:15:52.569 "data_offset": 2048, 00:15:52.569 "data_size": 63488 00:15:52.569 }, 00:15:52.569 { 00:15:52.569 "name": "BaseBdev3", 00:15:52.569 "uuid": "adcf5de4-4774-4056-98fd-123bffd1b39b", 00:15:52.569 "is_configured": true, 00:15:52.569 "data_offset": 2048, 00:15:52.569 "data_size": 63488 00:15:52.569 }, 00:15:52.569 { 00:15:52.569 "name": "BaseBdev4", 00:15:52.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.569 "is_configured": false, 00:15:52.569 "data_offset": 0, 00:15:52.569 "data_size": 0 00:15:52.569 } 00:15:52.569 ] 00:15:52.569 }' 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.569 23:50:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.829 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:52.829 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.829 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.090 [2024-12-06 23:50:04.407928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:53.090 [2024-12-06 23:50:04.408291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:53.090 [2024-12-06 23:50:04.408343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:53.090 [2024-12-06 
23:50:04.408621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:53.090 BaseBdev4 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.090 [2024-12-06 23:50:04.416186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:53.090 [2024-12-06 23:50:04.416247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:53.090 [2024-12-06 23:50:04.416540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:53.090 23:50:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.090 [ 00:15:53.090 { 00:15:53.090 "name": "BaseBdev4", 00:15:53.090 "aliases": [ 00:15:53.090 "b2447f0e-382b-44c0-a8b8-42454585adea" 00:15:53.090 ], 00:15:53.090 "product_name": "Malloc disk", 00:15:53.090 "block_size": 512, 00:15:53.090 "num_blocks": 65536, 00:15:53.090 "uuid": "b2447f0e-382b-44c0-a8b8-42454585adea", 00:15:53.090 "assigned_rate_limits": { 00:15:53.090 "rw_ios_per_sec": 0, 00:15:53.090 "rw_mbytes_per_sec": 0, 00:15:53.090 "r_mbytes_per_sec": 0, 00:15:53.090 "w_mbytes_per_sec": 0 00:15:53.090 }, 00:15:53.090 "claimed": true, 00:15:53.090 "claim_type": "exclusive_write", 00:15:53.090 "zoned": false, 00:15:53.090 "supported_io_types": { 00:15:53.090 "read": true, 00:15:53.090 "write": true, 00:15:53.090 "unmap": true, 00:15:53.090 "flush": true, 00:15:53.090 "reset": true, 00:15:53.090 "nvme_admin": false, 00:15:53.090 "nvme_io": false, 00:15:53.090 "nvme_io_md": false, 00:15:53.090 "write_zeroes": true, 00:15:53.090 "zcopy": true, 00:15:53.090 "get_zone_info": false, 00:15:53.090 "zone_management": false, 00:15:53.090 "zone_append": false, 00:15:53.090 "compare": false, 00:15:53.090 "compare_and_write": false, 00:15:53.090 "abort": true, 00:15:53.090 "seek_hole": false, 00:15:53.090 "seek_data": false, 00:15:53.090 "copy": true, 00:15:53.090 "nvme_iov_md": false 00:15:53.090 }, 00:15:53.090 "memory_domains": [ 00:15:53.090 { 00:15:53.090 "dma_device_id": "system", 00:15:53.090 "dma_device_type": 1 00:15:53.090 }, 00:15:53.090 { 00:15:53.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.090 "dma_device_type": 2 00:15:53.090 } 00:15:53.090 ], 00:15:53.090 "driver_specific": {} 00:15:53.090 } 00:15:53.090 ] 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.090 23:50:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.090 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.091 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.091 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.091 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.091 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:53.091 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.091 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.091 "name": "Existed_Raid", 00:15:53.091 "uuid": "617426ed-1afc-4b1d-948e-b74703aa1872", 00:15:53.091 "strip_size_kb": 64, 00:15:53.091 "state": "online", 00:15:53.091 "raid_level": "raid5f", 00:15:53.091 "superblock": true, 00:15:53.091 "num_base_bdevs": 4, 00:15:53.091 "num_base_bdevs_discovered": 4, 00:15:53.091 "num_base_bdevs_operational": 4, 00:15:53.091 "base_bdevs_list": [ 00:15:53.091 { 00:15:53.091 "name": "BaseBdev1", 00:15:53.091 "uuid": "032a1d2b-343f-4e10-b47d-b8df5d1df789", 00:15:53.091 "is_configured": true, 00:15:53.091 "data_offset": 2048, 00:15:53.091 "data_size": 63488 00:15:53.091 }, 00:15:53.091 { 00:15:53.091 "name": "BaseBdev2", 00:15:53.091 "uuid": "b23f4d29-403c-49aa-9ced-4edfe9c3a339", 00:15:53.091 "is_configured": true, 00:15:53.091 "data_offset": 2048, 00:15:53.091 "data_size": 63488 00:15:53.091 }, 00:15:53.091 { 00:15:53.091 "name": "BaseBdev3", 00:15:53.091 "uuid": "adcf5de4-4774-4056-98fd-123bffd1b39b", 00:15:53.091 "is_configured": true, 00:15:53.091 "data_offset": 2048, 00:15:53.091 "data_size": 63488 00:15:53.091 }, 00:15:53.091 { 00:15:53.091 "name": "BaseBdev4", 00:15:53.091 "uuid": "b2447f0e-382b-44c0-a8b8-42454585adea", 00:15:53.091 "is_configured": true, 00:15:53.091 "data_offset": 2048, 00:15:53.091 "data_size": 63488 00:15:53.091 } 00:15:53.091 ] 00:15:53.091 }' 00:15:53.091 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.091 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.661 [2024-12-06 23:50:04.943773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.661 "name": "Existed_Raid", 00:15:53.661 "aliases": [ 00:15:53.661 "617426ed-1afc-4b1d-948e-b74703aa1872" 00:15:53.661 ], 00:15:53.661 "product_name": "Raid Volume", 00:15:53.661 "block_size": 512, 00:15:53.661 "num_blocks": 190464, 00:15:53.661 "uuid": "617426ed-1afc-4b1d-948e-b74703aa1872", 00:15:53.661 "assigned_rate_limits": { 00:15:53.661 "rw_ios_per_sec": 0, 00:15:53.661 "rw_mbytes_per_sec": 0, 00:15:53.661 "r_mbytes_per_sec": 0, 00:15:53.661 "w_mbytes_per_sec": 0 00:15:53.661 }, 00:15:53.661 "claimed": false, 00:15:53.661 "zoned": false, 00:15:53.661 "supported_io_types": { 00:15:53.661 "read": true, 00:15:53.661 "write": true, 00:15:53.661 "unmap": false, 00:15:53.661 "flush": false, 
00:15:53.661 "reset": true, 00:15:53.661 "nvme_admin": false, 00:15:53.661 "nvme_io": false, 00:15:53.661 "nvme_io_md": false, 00:15:53.661 "write_zeroes": true, 00:15:53.661 "zcopy": false, 00:15:53.661 "get_zone_info": false, 00:15:53.661 "zone_management": false, 00:15:53.661 "zone_append": false, 00:15:53.661 "compare": false, 00:15:53.661 "compare_and_write": false, 00:15:53.661 "abort": false, 00:15:53.661 "seek_hole": false, 00:15:53.661 "seek_data": false, 00:15:53.661 "copy": false, 00:15:53.661 "nvme_iov_md": false 00:15:53.661 }, 00:15:53.661 "driver_specific": { 00:15:53.661 "raid": { 00:15:53.661 "uuid": "617426ed-1afc-4b1d-948e-b74703aa1872", 00:15:53.661 "strip_size_kb": 64, 00:15:53.661 "state": "online", 00:15:53.661 "raid_level": "raid5f", 00:15:53.661 "superblock": true, 00:15:53.661 "num_base_bdevs": 4, 00:15:53.661 "num_base_bdevs_discovered": 4, 00:15:53.661 "num_base_bdevs_operational": 4, 00:15:53.661 "base_bdevs_list": [ 00:15:53.661 { 00:15:53.661 "name": "BaseBdev1", 00:15:53.661 "uuid": "032a1d2b-343f-4e10-b47d-b8df5d1df789", 00:15:53.661 "is_configured": true, 00:15:53.661 "data_offset": 2048, 00:15:53.661 "data_size": 63488 00:15:53.661 }, 00:15:53.661 { 00:15:53.661 "name": "BaseBdev2", 00:15:53.661 "uuid": "b23f4d29-403c-49aa-9ced-4edfe9c3a339", 00:15:53.661 "is_configured": true, 00:15:53.661 "data_offset": 2048, 00:15:53.661 "data_size": 63488 00:15:53.661 }, 00:15:53.661 { 00:15:53.661 "name": "BaseBdev3", 00:15:53.661 "uuid": "adcf5de4-4774-4056-98fd-123bffd1b39b", 00:15:53.661 "is_configured": true, 00:15:53.661 "data_offset": 2048, 00:15:53.661 "data_size": 63488 00:15:53.661 }, 00:15:53.661 { 00:15:53.661 "name": "BaseBdev4", 00:15:53.661 "uuid": "b2447f0e-382b-44c0-a8b8-42454585adea", 00:15:53.661 "is_configured": true, 00:15:53.661 "data_offset": 2048, 00:15:53.661 "data_size": 63488 00:15:53.661 } 00:15:53.661 ] 00:15:53.661 } 00:15:53.661 } 00:15:53.661 }' 00:15:53.661 23:50:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:53.662 BaseBdev2 00:15:53.662 BaseBdev3 00:15:53.662 BaseBdev4' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.662 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.921 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.921 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.921 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.921 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.921 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:53.921 23:50:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.922 [2024-12-06 23:50:05.291007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.922 "name": "Existed_Raid", 00:15:53.922 "uuid": "617426ed-1afc-4b1d-948e-b74703aa1872", 00:15:53.922 "strip_size_kb": 64, 00:15:53.922 "state": "online", 00:15:53.922 "raid_level": "raid5f", 00:15:53.922 "superblock": true, 00:15:53.922 "num_base_bdevs": 4, 00:15:53.922 "num_base_bdevs_discovered": 3, 00:15:53.922 "num_base_bdevs_operational": 3, 00:15:53.922 "base_bdevs_list": [ 00:15:53.922 { 00:15:53.922 "name": 
null, 00:15:53.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.922 "is_configured": false, 00:15:53.922 "data_offset": 0, 00:15:53.922 "data_size": 63488 00:15:53.922 }, 00:15:53.922 { 00:15:53.922 "name": "BaseBdev2", 00:15:53.922 "uuid": "b23f4d29-403c-49aa-9ced-4edfe9c3a339", 00:15:53.922 "is_configured": true, 00:15:53.922 "data_offset": 2048, 00:15:53.922 "data_size": 63488 00:15:53.922 }, 00:15:53.922 { 00:15:53.922 "name": "BaseBdev3", 00:15:53.922 "uuid": "adcf5de4-4774-4056-98fd-123bffd1b39b", 00:15:53.922 "is_configured": true, 00:15:53.922 "data_offset": 2048, 00:15:53.922 "data_size": 63488 00:15:53.922 }, 00:15:53.922 { 00:15:53.922 "name": "BaseBdev4", 00:15:53.922 "uuid": "b2447f0e-382b-44c0-a8b8-42454585adea", 00:15:53.922 "is_configured": true, 00:15:53.922 "data_offset": 2048, 00:15:53.922 "data_size": 63488 00:15:53.922 } 00:15:53.922 ] 00:15:53.922 }' 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.922 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.490 [2024-12-06 23:50:05.863807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.490 [2024-12-06 23:50:05.863963] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.490 [2024-12-06 23:50:05.955135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.490 23:50:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.490 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.490 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:54.490 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:54.490 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.490 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.490 [2024-12-06 23:50:06.015042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.750 [2024-12-06 
23:50:06.164239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:54.750 [2024-12-06 23:50:06.164345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:54.750 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 BaseBdev2 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 [ 00:15:55.011 { 00:15:55.011 "name": "BaseBdev2", 00:15:55.011 "aliases": [ 00:15:55.011 "6bbdcaba-886d-44d9-aa13-ca02ebb01986" 00:15:55.011 ], 00:15:55.011 "product_name": "Malloc disk", 00:15:55.011 "block_size": 512, 00:15:55.011 
"num_blocks": 65536, 00:15:55.011 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:55.011 "assigned_rate_limits": { 00:15:55.011 "rw_ios_per_sec": 0, 00:15:55.011 "rw_mbytes_per_sec": 0, 00:15:55.011 "r_mbytes_per_sec": 0, 00:15:55.011 "w_mbytes_per_sec": 0 00:15:55.011 }, 00:15:55.011 "claimed": false, 00:15:55.011 "zoned": false, 00:15:55.011 "supported_io_types": { 00:15:55.011 "read": true, 00:15:55.011 "write": true, 00:15:55.011 "unmap": true, 00:15:55.011 "flush": true, 00:15:55.011 "reset": true, 00:15:55.011 "nvme_admin": false, 00:15:55.011 "nvme_io": false, 00:15:55.011 "nvme_io_md": false, 00:15:55.011 "write_zeroes": true, 00:15:55.011 "zcopy": true, 00:15:55.011 "get_zone_info": false, 00:15:55.011 "zone_management": false, 00:15:55.011 "zone_append": false, 00:15:55.011 "compare": false, 00:15:55.011 "compare_and_write": false, 00:15:55.011 "abort": true, 00:15:55.011 "seek_hole": false, 00:15:55.011 "seek_data": false, 00:15:55.011 "copy": true, 00:15:55.011 "nvme_iov_md": false 00:15:55.011 }, 00:15:55.011 "memory_domains": [ 00:15:55.011 { 00:15:55.011 "dma_device_id": "system", 00:15:55.011 "dma_device_type": 1 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.011 "dma_device_type": 2 00:15:55.011 } 00:15:55.011 ], 00:15:55.011 "driver_specific": {} 00:15:55.011 } 00:15:55.011 ] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:55.011 23:50:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 BaseBdev3 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 [ 00:15:55.011 { 00:15:55.011 "name": "BaseBdev3", 00:15:55.011 "aliases": [ 00:15:55.011 
"53384853-f92f-4b53-904e-c43444dd5ee9" 00:15:55.011 ], 00:15:55.011 "product_name": "Malloc disk", 00:15:55.011 "block_size": 512, 00:15:55.011 "num_blocks": 65536, 00:15:55.011 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:55.011 "assigned_rate_limits": { 00:15:55.011 "rw_ios_per_sec": 0, 00:15:55.011 "rw_mbytes_per_sec": 0, 00:15:55.011 "r_mbytes_per_sec": 0, 00:15:55.011 "w_mbytes_per_sec": 0 00:15:55.011 }, 00:15:55.011 "claimed": false, 00:15:55.011 "zoned": false, 00:15:55.011 "supported_io_types": { 00:15:55.011 "read": true, 00:15:55.011 "write": true, 00:15:55.011 "unmap": true, 00:15:55.011 "flush": true, 00:15:55.011 "reset": true, 00:15:55.011 "nvme_admin": false, 00:15:55.011 "nvme_io": false, 00:15:55.011 "nvme_io_md": false, 00:15:55.011 "write_zeroes": true, 00:15:55.011 "zcopy": true, 00:15:55.011 "get_zone_info": false, 00:15:55.011 "zone_management": false, 00:15:55.011 "zone_append": false, 00:15:55.011 "compare": false, 00:15:55.011 "compare_and_write": false, 00:15:55.011 "abort": true, 00:15:55.011 "seek_hole": false, 00:15:55.011 "seek_data": false, 00:15:55.011 "copy": true, 00:15:55.011 "nvme_iov_md": false 00:15:55.011 }, 00:15:55.011 "memory_domains": [ 00:15:55.011 { 00:15:55.011 "dma_device_id": "system", 00:15:55.011 "dma_device_type": 1 00:15:55.011 }, 00:15:55.011 { 00:15:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.011 "dma_device_type": 2 00:15:55.011 } 00:15:55.011 ], 00:15:55.011 "driver_specific": {} 00:15:55.011 } 00:15:55.011 ] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.011 23:50:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 BaseBdev4 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.011 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:55.011 [ 00:15:55.011 { 00:15:55.011 "name": "BaseBdev4", 00:15:55.011 "aliases": [ 00:15:55.011 "957be71a-fdff-495c-8f04-72d49c1306a8" 00:15:55.011 ], 00:15:55.012 "product_name": "Malloc disk", 00:15:55.012 "block_size": 512, 00:15:55.012 "num_blocks": 65536, 00:15:55.012 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:55.012 "assigned_rate_limits": { 00:15:55.012 "rw_ios_per_sec": 0, 00:15:55.012 "rw_mbytes_per_sec": 0, 00:15:55.012 "r_mbytes_per_sec": 0, 00:15:55.012 "w_mbytes_per_sec": 0 00:15:55.012 }, 00:15:55.012 "claimed": false, 00:15:55.012 "zoned": false, 00:15:55.012 "supported_io_types": { 00:15:55.012 "read": true, 00:15:55.012 "write": true, 00:15:55.012 "unmap": true, 00:15:55.012 "flush": true, 00:15:55.012 "reset": true, 00:15:55.012 "nvme_admin": false, 00:15:55.012 "nvme_io": false, 00:15:55.012 "nvme_io_md": false, 00:15:55.012 "write_zeroes": true, 00:15:55.012 "zcopy": true, 00:15:55.012 "get_zone_info": false, 00:15:55.012 "zone_management": false, 00:15:55.012 "zone_append": false, 00:15:55.012 "compare": false, 00:15:55.012 "compare_and_write": false, 00:15:55.012 "abort": true, 00:15:55.012 "seek_hole": false, 00:15:55.012 "seek_data": false, 00:15:55.012 "copy": true, 00:15:55.012 "nvme_iov_md": false 00:15:55.012 }, 00:15:55.012 "memory_domains": [ 00:15:55.012 { 00:15:55.012 "dma_device_id": "system", 00:15:55.012 "dma_device_type": 1 00:15:55.012 }, 00:15:55.012 { 00:15:55.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.012 "dma_device_type": 2 00:15:55.012 } 00:15:55.012 ], 00:15:55.012 "driver_specific": {} 00:15:55.012 } 00:15:55.012 ] 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:55.012 23:50:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.012 [2024-12-06 23:50:06.549770] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.012 [2024-12-06 23:50:06.549812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.012 [2024-12-06 23:50:06.549846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.012 [2024-12-06 23:50:06.551551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.012 [2024-12-06 23:50:06.551604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.012 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.272 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.272 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.272 "name": "Existed_Raid", 00:15:55.272 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:55.272 "strip_size_kb": 64, 00:15:55.272 "state": "configuring", 00:15:55.272 "raid_level": "raid5f", 00:15:55.272 "superblock": true, 00:15:55.272 "num_base_bdevs": 4, 00:15:55.272 "num_base_bdevs_discovered": 3, 00:15:55.272 "num_base_bdevs_operational": 4, 00:15:55.272 "base_bdevs_list": [ 00:15:55.272 { 00:15:55.272 "name": "BaseBdev1", 00:15:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.272 "is_configured": false, 00:15:55.272 "data_offset": 0, 00:15:55.272 "data_size": 0 00:15:55.272 }, 00:15:55.272 { 00:15:55.272 "name": "BaseBdev2", 00:15:55.272 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:55.272 "is_configured": true, 00:15:55.272 "data_offset": 2048, 00:15:55.272 
"data_size": 63488 00:15:55.272 }, 00:15:55.272 { 00:15:55.272 "name": "BaseBdev3", 00:15:55.272 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:55.272 "is_configured": true, 00:15:55.272 "data_offset": 2048, 00:15:55.272 "data_size": 63488 00:15:55.272 }, 00:15:55.272 { 00:15:55.272 "name": "BaseBdev4", 00:15:55.272 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:55.272 "is_configured": true, 00:15:55.272 "data_offset": 2048, 00:15:55.272 "data_size": 63488 00:15:55.272 } 00:15:55.272 ] 00:15:55.272 }' 00:15:55.272 23:50:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.272 23:50:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.533 [2024-12-06 23:50:07.032910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.533 23:50:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.533 "name": "Existed_Raid", 00:15:55.533 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:55.533 "strip_size_kb": 64, 00:15:55.533 "state": "configuring", 00:15:55.533 "raid_level": "raid5f", 00:15:55.533 "superblock": true, 00:15:55.533 "num_base_bdevs": 4, 00:15:55.533 "num_base_bdevs_discovered": 2, 00:15:55.533 "num_base_bdevs_operational": 4, 00:15:55.533 "base_bdevs_list": [ 00:15:55.533 { 00:15:55.533 "name": "BaseBdev1", 00:15:55.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.533 "is_configured": false, 00:15:55.533 "data_offset": 0, 00:15:55.533 "data_size": 0 00:15:55.533 }, 00:15:55.533 { 00:15:55.533 "name": null, 00:15:55.533 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:55.533 
"is_configured": false, 00:15:55.533 "data_offset": 0, 00:15:55.533 "data_size": 63488 00:15:55.533 }, 00:15:55.533 { 00:15:55.533 "name": "BaseBdev3", 00:15:55.533 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:55.533 "is_configured": true, 00:15:55.533 "data_offset": 2048, 00:15:55.533 "data_size": 63488 00:15:55.533 }, 00:15:55.533 { 00:15:55.533 "name": "BaseBdev4", 00:15:55.533 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:55.533 "is_configured": true, 00:15:55.533 "data_offset": 2048, 00:15:55.533 "data_size": 63488 00:15:55.533 } 00:15:55.533 ] 00:15:55.533 }' 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.533 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.125 [2024-12-06 23:50:07.561833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:56.125 BaseBdev1 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.125 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.125 [ 00:15:56.126 { 00:15:56.126 "name": "BaseBdev1", 00:15:56.126 "aliases": [ 00:15:56.126 "4360d6d1-3b94-4d33-8c0e-98b7b5305161" 00:15:56.126 ], 00:15:56.126 "product_name": "Malloc disk", 00:15:56.126 "block_size": 512, 00:15:56.126 "num_blocks": 65536, 00:15:56.126 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 
00:15:56.126 "assigned_rate_limits": { 00:15:56.126 "rw_ios_per_sec": 0, 00:15:56.126 "rw_mbytes_per_sec": 0, 00:15:56.126 "r_mbytes_per_sec": 0, 00:15:56.126 "w_mbytes_per_sec": 0 00:15:56.126 }, 00:15:56.126 "claimed": true, 00:15:56.126 "claim_type": "exclusive_write", 00:15:56.126 "zoned": false, 00:15:56.126 "supported_io_types": { 00:15:56.126 "read": true, 00:15:56.126 "write": true, 00:15:56.126 "unmap": true, 00:15:56.126 "flush": true, 00:15:56.126 "reset": true, 00:15:56.126 "nvme_admin": false, 00:15:56.126 "nvme_io": false, 00:15:56.126 "nvme_io_md": false, 00:15:56.126 "write_zeroes": true, 00:15:56.126 "zcopy": true, 00:15:56.126 "get_zone_info": false, 00:15:56.126 "zone_management": false, 00:15:56.126 "zone_append": false, 00:15:56.126 "compare": false, 00:15:56.126 "compare_and_write": false, 00:15:56.126 "abort": true, 00:15:56.126 "seek_hole": false, 00:15:56.126 "seek_data": false, 00:15:56.126 "copy": true, 00:15:56.126 "nvme_iov_md": false 00:15:56.126 }, 00:15:56.126 "memory_domains": [ 00:15:56.126 { 00:15:56.126 "dma_device_id": "system", 00:15:56.126 "dma_device_type": 1 00:15:56.126 }, 00:15:56.126 { 00:15:56.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.126 "dma_device_type": 2 00:15:56.126 } 00:15:56.126 ], 00:15:56.126 "driver_specific": {} 00:15:56.126 } 00:15:56.126 ] 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.126 23:50:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.126 "name": "Existed_Raid", 00:15:56.126 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:56.126 "strip_size_kb": 64, 00:15:56.126 "state": "configuring", 00:15:56.126 "raid_level": "raid5f", 00:15:56.126 "superblock": true, 00:15:56.126 "num_base_bdevs": 4, 00:15:56.126 "num_base_bdevs_discovered": 3, 00:15:56.126 "num_base_bdevs_operational": 4, 00:15:56.126 "base_bdevs_list": [ 00:15:56.126 { 00:15:56.126 "name": "BaseBdev1", 00:15:56.126 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 
00:15:56.126 "is_configured": true, 00:15:56.126 "data_offset": 2048, 00:15:56.126 "data_size": 63488 00:15:56.126 }, 00:15:56.126 { 00:15:56.126 "name": null, 00:15:56.126 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:56.126 "is_configured": false, 00:15:56.126 "data_offset": 0, 00:15:56.126 "data_size": 63488 00:15:56.126 }, 00:15:56.126 { 00:15:56.126 "name": "BaseBdev3", 00:15:56.126 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:56.126 "is_configured": true, 00:15:56.126 "data_offset": 2048, 00:15:56.126 "data_size": 63488 00:15:56.126 }, 00:15:56.126 { 00:15:56.126 "name": "BaseBdev4", 00:15:56.126 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:56.126 "is_configured": true, 00:15:56.126 "data_offset": 2048, 00:15:56.126 "data_size": 63488 00:15:56.126 } 00:15:56.126 ] 00:15:56.126 }' 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.126 23:50:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.737 [2024-12-06 23:50:08.057038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.737 "name": "Existed_Raid", 00:15:56.737 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:56.737 "strip_size_kb": 64, 00:15:56.737 "state": "configuring", 00:15:56.737 "raid_level": "raid5f", 00:15:56.737 "superblock": true, 00:15:56.737 "num_base_bdevs": 4, 00:15:56.737 "num_base_bdevs_discovered": 2, 00:15:56.737 "num_base_bdevs_operational": 4, 00:15:56.737 "base_bdevs_list": [ 00:15:56.737 { 00:15:56.737 "name": "BaseBdev1", 00:15:56.737 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 00:15:56.737 "is_configured": true, 00:15:56.737 "data_offset": 2048, 00:15:56.737 "data_size": 63488 00:15:56.737 }, 00:15:56.737 { 00:15:56.737 "name": null, 00:15:56.737 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:56.737 "is_configured": false, 00:15:56.737 "data_offset": 0, 00:15:56.737 "data_size": 63488 00:15:56.737 }, 00:15:56.737 { 00:15:56.737 "name": null, 00:15:56.737 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:56.737 "is_configured": false, 00:15:56.737 "data_offset": 0, 00:15:56.737 "data_size": 63488 00:15:56.737 }, 00:15:56.737 { 00:15:56.737 "name": "BaseBdev4", 00:15:56.737 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:56.737 "is_configured": true, 00:15:56.737 "data_offset": 2048, 00:15:56.737 "data_size": 63488 00:15:56.737 } 00:15:56.737 ] 00:15:56.737 }' 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.737 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.005 [2024-12-06 23:50:08.532212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.005 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.265 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.265 "name": "Existed_Raid", 00:15:57.265 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:57.265 "strip_size_kb": 64, 00:15:57.265 "state": "configuring", 00:15:57.265 "raid_level": "raid5f", 00:15:57.265 "superblock": true, 00:15:57.265 "num_base_bdevs": 4, 00:15:57.265 "num_base_bdevs_discovered": 3, 00:15:57.265 "num_base_bdevs_operational": 4, 00:15:57.265 "base_bdevs_list": [ 00:15:57.265 { 00:15:57.265 "name": "BaseBdev1", 00:15:57.265 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 00:15:57.265 "is_configured": true, 00:15:57.265 "data_offset": 2048, 00:15:57.265 "data_size": 63488 00:15:57.265 }, 00:15:57.265 { 00:15:57.265 "name": null, 00:15:57.265 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:57.265 "is_configured": false, 00:15:57.265 "data_offset": 0, 00:15:57.265 "data_size": 63488 00:15:57.265 }, 00:15:57.265 { 00:15:57.265 "name": "BaseBdev3", 00:15:57.265 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 
00:15:57.265 "is_configured": true, 00:15:57.265 "data_offset": 2048, 00:15:57.265 "data_size": 63488 00:15:57.265 }, 00:15:57.265 { 00:15:57.265 "name": "BaseBdev4", 00:15:57.265 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:57.265 "is_configured": true, 00:15:57.265 "data_offset": 2048, 00:15:57.265 "data_size": 63488 00:15:57.265 } 00:15:57.265 ] 00:15:57.265 }' 00:15:57.265 23:50:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.265 23:50:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.525 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.525 [2024-12-06 23:50:09.059555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.785 "name": "Existed_Raid", 00:15:57.785 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:57.785 "strip_size_kb": 64, 00:15:57.785 "state": "configuring", 00:15:57.785 "raid_level": "raid5f", 
00:15:57.785 "superblock": true, 00:15:57.785 "num_base_bdevs": 4, 00:15:57.785 "num_base_bdevs_discovered": 2, 00:15:57.785 "num_base_bdevs_operational": 4, 00:15:57.785 "base_bdevs_list": [ 00:15:57.785 { 00:15:57.785 "name": null, 00:15:57.785 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 00:15:57.785 "is_configured": false, 00:15:57.785 "data_offset": 0, 00:15:57.785 "data_size": 63488 00:15:57.785 }, 00:15:57.785 { 00:15:57.785 "name": null, 00:15:57.785 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:57.785 "is_configured": false, 00:15:57.785 "data_offset": 0, 00:15:57.785 "data_size": 63488 00:15:57.785 }, 00:15:57.785 { 00:15:57.785 "name": "BaseBdev3", 00:15:57.785 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:57.785 "is_configured": true, 00:15:57.785 "data_offset": 2048, 00:15:57.785 "data_size": 63488 00:15:57.785 }, 00:15:57.785 { 00:15:57.785 "name": "BaseBdev4", 00:15:57.785 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:57.785 "is_configured": true, 00:15:57.785 "data_offset": 2048, 00:15:57.785 "data_size": 63488 00:15:57.785 } 00:15:57.785 ] 00:15:57.785 }' 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.785 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.045 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.045 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.045 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.045 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:58.045 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.306 [2024-12-06 23:50:09.633758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.306 "name": "Existed_Raid", 00:15:58.306 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:58.306 "strip_size_kb": 64, 00:15:58.306 "state": "configuring", 00:15:58.306 "raid_level": "raid5f", 00:15:58.306 "superblock": true, 00:15:58.306 "num_base_bdevs": 4, 00:15:58.306 "num_base_bdevs_discovered": 3, 00:15:58.306 "num_base_bdevs_operational": 4, 00:15:58.306 "base_bdevs_list": [ 00:15:58.306 { 00:15:58.306 "name": null, 00:15:58.306 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 00:15:58.306 "is_configured": false, 00:15:58.306 "data_offset": 0, 00:15:58.306 "data_size": 63488 00:15:58.306 }, 00:15:58.306 { 00:15:58.306 "name": "BaseBdev2", 00:15:58.306 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:58.306 "is_configured": true, 00:15:58.306 "data_offset": 2048, 00:15:58.306 "data_size": 63488 00:15:58.306 }, 00:15:58.306 { 00:15:58.306 "name": "BaseBdev3", 00:15:58.306 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:58.306 "is_configured": true, 00:15:58.306 "data_offset": 2048, 00:15:58.306 "data_size": 63488 00:15:58.306 }, 00:15:58.306 { 00:15:58.306 "name": "BaseBdev4", 00:15:58.306 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:58.306 "is_configured": true, 00:15:58.306 "data_offset": 2048, 00:15:58.306 "data_size": 63488 00:15:58.306 } 00:15:58.306 ] 00:15:58.306 }' 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.306 23:50:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4360d6d1-3b94-4d33-8c0e-98b7b5305161 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.566 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.826 [2024-12-06 23:50:10.151970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:58.827 [2024-12-06 
23:50:10.152278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:58.827 [2024-12-06 23:50:10.152327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:58.827 [2024-12-06 23:50:10.152590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:58.827 NewBaseBdev 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [2024-12-06 23:50:10.159253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:58.827 [2024-12-06 23:50:10.159310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:58.827 [2024-12-06 23:50:10.159504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.827 23:50:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [ 00:15:58.827 { 00:15:58.827 "name": "NewBaseBdev", 00:15:58.827 "aliases": [ 00:15:58.827 "4360d6d1-3b94-4d33-8c0e-98b7b5305161" 00:15:58.827 ], 00:15:58.827 "product_name": "Malloc disk", 00:15:58.827 "block_size": 512, 00:15:58.827 "num_blocks": 65536, 00:15:58.827 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 00:15:58.827 "assigned_rate_limits": { 00:15:58.827 "rw_ios_per_sec": 0, 00:15:58.827 "rw_mbytes_per_sec": 0, 00:15:58.827 "r_mbytes_per_sec": 0, 00:15:58.827 "w_mbytes_per_sec": 0 00:15:58.827 }, 00:15:58.827 "claimed": true, 00:15:58.827 "claim_type": "exclusive_write", 00:15:58.827 "zoned": false, 00:15:58.827 "supported_io_types": { 00:15:58.827 "read": true, 00:15:58.827 "write": true, 00:15:58.827 "unmap": true, 00:15:58.827 "flush": true, 00:15:58.827 "reset": true, 00:15:58.827 "nvme_admin": false, 00:15:58.827 "nvme_io": false, 00:15:58.827 "nvme_io_md": false, 00:15:58.827 "write_zeroes": true, 00:15:58.827 "zcopy": true, 00:15:58.827 "get_zone_info": false, 00:15:58.827 "zone_management": false, 00:15:58.827 "zone_append": false, 00:15:58.827 "compare": false, 00:15:58.827 "compare_and_write": false, 00:15:58.827 "abort": true, 00:15:58.827 "seek_hole": false, 00:15:58.827 "seek_data": false, 00:15:58.827 "copy": true, 00:15:58.827 "nvme_iov_md": false 00:15:58.827 }, 00:15:58.827 "memory_domains": [ 00:15:58.827 { 00:15:58.827 "dma_device_id": "system", 00:15:58.827 "dma_device_type": 1 00:15:58.827 }, 00:15:58.827 { 00:15:58.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:58.827 "dma_device_type": 2 00:15:58.827 } 00:15:58.827 ], 00:15:58.827 "driver_specific": {} 00:15:58.827 } 00:15:58.827 ] 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.827 "name": "Existed_Raid", 00:15:58.827 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:58.827 "strip_size_kb": 64, 00:15:58.827 "state": "online", 00:15:58.827 "raid_level": "raid5f", 00:15:58.827 "superblock": true, 00:15:58.827 "num_base_bdevs": 4, 00:15:58.827 "num_base_bdevs_discovered": 4, 00:15:58.827 "num_base_bdevs_operational": 4, 00:15:58.827 "base_bdevs_list": [ 00:15:58.827 { 00:15:58.827 "name": "NewBaseBdev", 00:15:58.827 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 00:15:58.827 "is_configured": true, 00:15:58.827 "data_offset": 2048, 00:15:58.827 "data_size": 63488 00:15:58.827 }, 00:15:58.827 { 00:15:58.827 "name": "BaseBdev2", 00:15:58.827 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:58.827 "is_configured": true, 00:15:58.827 "data_offset": 2048, 00:15:58.827 "data_size": 63488 00:15:58.827 }, 00:15:58.827 { 00:15:58.827 "name": "BaseBdev3", 00:15:58.827 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:58.827 "is_configured": true, 00:15:58.827 "data_offset": 2048, 00:15:58.827 "data_size": 63488 00:15:58.827 }, 00:15:58.827 { 00:15:58.827 "name": "BaseBdev4", 00:15:58.827 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:58.827 "is_configured": true, 00:15:58.827 "data_offset": 2048, 00:15:58.827 "data_size": 63488 00:15:58.827 } 00:15:58.827 ] 00:15:58.827 }' 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.827 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.088 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:59.088 23:50:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:59.088 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.088 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.088 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.088 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.348 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.348 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:59.348 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.348 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.348 [2024-12-06 23:50:10.658697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.348 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.348 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.348 "name": "Existed_Raid", 00:15:59.348 "aliases": [ 00:15:59.348 "57802d35-0f62-456d-9100-b5a0ef5f85e6" 00:15:59.348 ], 00:15:59.348 "product_name": "Raid Volume", 00:15:59.348 "block_size": 512, 00:15:59.348 "num_blocks": 190464, 00:15:59.348 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:59.348 "assigned_rate_limits": { 00:15:59.348 "rw_ios_per_sec": 0, 00:15:59.348 "rw_mbytes_per_sec": 0, 00:15:59.348 "r_mbytes_per_sec": 0, 00:15:59.348 "w_mbytes_per_sec": 0 00:15:59.348 }, 00:15:59.348 "claimed": false, 00:15:59.348 "zoned": false, 00:15:59.348 "supported_io_types": { 00:15:59.348 "read": true, 00:15:59.348 
"write": true, 00:15:59.348 "unmap": false, 00:15:59.348 "flush": false, 00:15:59.348 "reset": true, 00:15:59.348 "nvme_admin": false, 00:15:59.348 "nvme_io": false, 00:15:59.348 "nvme_io_md": false, 00:15:59.348 "write_zeroes": true, 00:15:59.348 "zcopy": false, 00:15:59.348 "get_zone_info": false, 00:15:59.348 "zone_management": false, 00:15:59.348 "zone_append": false, 00:15:59.348 "compare": false, 00:15:59.348 "compare_and_write": false, 00:15:59.349 "abort": false, 00:15:59.349 "seek_hole": false, 00:15:59.349 "seek_data": false, 00:15:59.349 "copy": false, 00:15:59.349 "nvme_iov_md": false 00:15:59.349 }, 00:15:59.349 "driver_specific": { 00:15:59.349 "raid": { 00:15:59.349 "uuid": "57802d35-0f62-456d-9100-b5a0ef5f85e6", 00:15:59.349 "strip_size_kb": 64, 00:15:59.349 "state": "online", 00:15:59.349 "raid_level": "raid5f", 00:15:59.349 "superblock": true, 00:15:59.349 "num_base_bdevs": 4, 00:15:59.349 "num_base_bdevs_discovered": 4, 00:15:59.349 "num_base_bdevs_operational": 4, 00:15:59.349 "base_bdevs_list": [ 00:15:59.349 { 00:15:59.349 "name": "NewBaseBdev", 00:15:59.349 "uuid": "4360d6d1-3b94-4d33-8c0e-98b7b5305161", 00:15:59.349 "is_configured": true, 00:15:59.349 "data_offset": 2048, 00:15:59.349 "data_size": 63488 00:15:59.349 }, 00:15:59.349 { 00:15:59.349 "name": "BaseBdev2", 00:15:59.349 "uuid": "6bbdcaba-886d-44d9-aa13-ca02ebb01986", 00:15:59.349 "is_configured": true, 00:15:59.349 "data_offset": 2048, 00:15:59.349 "data_size": 63488 00:15:59.349 }, 00:15:59.349 { 00:15:59.349 "name": "BaseBdev3", 00:15:59.349 "uuid": "53384853-f92f-4b53-904e-c43444dd5ee9", 00:15:59.349 "is_configured": true, 00:15:59.349 "data_offset": 2048, 00:15:59.349 "data_size": 63488 00:15:59.349 }, 00:15:59.349 { 00:15:59.349 "name": "BaseBdev4", 00:15:59.349 "uuid": "957be71a-fdff-495c-8f04-72d49c1306a8", 00:15:59.349 "is_configured": true, 00:15:59.349 "data_offset": 2048, 00:15:59.349 "data_size": 63488 00:15:59.349 } 00:15:59.349 ] 00:15:59.349 } 00:15:59.349 } 
00:15:59.349 }' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:59.349 BaseBdev2 00:15:59.349 BaseBdev3 00:15:59.349 BaseBdev4' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.349 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.609 
23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.609 [2024-12-06 23:50:10.985924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.609 [2024-12-06 23:50:10.985952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.609 [2024-12-06 23:50:10.986012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.609 [2024-12-06 23:50:10.986299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.609 [2024-12-06 23:50:10.986317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83344 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83344 ']' 00:15:59.609 23:50:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83344 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:59.609 23:50:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.609 23:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83344 00:15:59.609 killing process with pid 83344 00:15:59.609 23:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.609 23:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.609 23:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83344' 00:15:59.609 23:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83344 00:15:59.609 [2024-12-06 23:50:11.036169] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.609 23:50:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83344 00:15:59.869 [2024-12-06 23:50:11.402587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.252 23:50:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:01.252 00:16:01.252 real 0m11.644s 00:16:01.252 user 0m18.616s 00:16:01.252 sys 0m2.171s 00:16:01.252 23:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.252 ************************************ 00:16:01.252 23:50:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.252 END TEST raid5f_state_function_test_sb 00:16:01.252 ************************************ 00:16:01.252 23:50:12 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:16:01.252 23:50:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:01.252 23:50:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.252 23:50:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.252 ************************************ 00:16:01.252 START TEST raid5f_superblock_test 00:16:01.252 ************************************ 00:16:01.252 23:50:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:01.252 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:01.252 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:01.252 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:01.252 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:01.252 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84020 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84020 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84020 ']' 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.253 23:50:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.253 [2024-12-06 23:50:12.636466] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:16:01.253 [2024-12-06 23:50:12.636580] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84020 ] 00:16:01.253 [2024-12-06 23:50:12.808925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.512 [2024-12-06 23:50:12.910693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.772 [2024-12-06 23:50:13.084759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.772 [2024-12-06 23:50:13.084799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.033 malloc1 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.033 [2024-12-06 23:50:13.490565] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.033 [2024-12-06 23:50:13.490622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.033 [2024-12-06 23:50:13.490643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:02.033 [2024-12-06 23:50:13.490651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.033 [2024-12-06 23:50:13.492635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.033 [2024-12-06 23:50:13.492680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.033 pt1 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.033 malloc2 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.033 [2024-12-06 23:50:13.542418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.033 [2024-12-06 23:50:13.542467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.033 [2024-12-06 23:50:13.542490] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:02.033 [2024-12-06 23:50:13.542498] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.033 [2024-12-06 23:50:13.544457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.033 [2024-12-06 23:50:13.544491] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.033 pt2 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.033 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.034 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.294 malloc3 00:16:02.294 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.294 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:02.294 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.294 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.295 [2024-12-06 23:50:13.626249] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:02.295 [2024-12-06 23:50:13.626293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.295 [2024-12-06 23:50:13.626312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.295 [2024-12-06 23:50:13.626320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.295 [2024-12-06 23:50:13.628303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.295 [2024-12-06 23:50:13.628337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:02.295 pt3 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.295 23:50:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.295 malloc4 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.295 [2024-12-06 23:50:13.675607] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:02.295 [2024-12-06 23:50:13.675653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.295 [2024-12-06 23:50:13.675681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:02.295 [2024-12-06 23:50:13.675705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.295 [2024-12-06 23:50:13.677912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.295 [2024-12-06 23:50:13.677944] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:02.295 pt4 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.295 [2024-12-06 23:50:13.687623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.295 [2024-12-06 23:50:13.689371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.295 [2024-12-06 23:50:13.689472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:02.295 [2024-12-06 23:50:13.689519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:02.295 [2024-12-06 23:50:13.689710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:02.295 [2024-12-06 23:50:13.689749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:02.295 [2024-12-06 23:50:13.689976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:02.295 [2024-12-06 23:50:13.696942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:02.295 [2024-12-06 23:50:13.696969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:02.295 [2024-12-06 23:50:13.697157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.295 
23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.295 "name": "raid_bdev1", 00:16:02.295 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:02.295 "strip_size_kb": 64, 00:16:02.295 "state": "online", 00:16:02.295 "raid_level": "raid5f", 00:16:02.295 "superblock": true, 00:16:02.295 "num_base_bdevs": 4, 00:16:02.295 "num_base_bdevs_discovered": 4, 00:16:02.295 "num_base_bdevs_operational": 4, 00:16:02.295 "base_bdevs_list": [ 00:16:02.295 { 00:16:02.295 "name": "pt1", 00:16:02.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.295 "is_configured": true, 00:16:02.295 "data_offset": 2048, 00:16:02.295 "data_size": 63488 00:16:02.295 }, 00:16:02.295 { 00:16:02.295 "name": "pt2", 00:16:02.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.295 "is_configured": true, 00:16:02.295 "data_offset": 2048, 00:16:02.295 
"data_size": 63488 00:16:02.295 }, 00:16:02.295 { 00:16:02.295 "name": "pt3", 00:16:02.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.295 "is_configured": true, 00:16:02.295 "data_offset": 2048, 00:16:02.295 "data_size": 63488 00:16:02.295 }, 00:16:02.295 { 00:16:02.295 "name": "pt4", 00:16:02.295 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:02.295 "is_configured": true, 00:16:02.295 "data_offset": 2048, 00:16:02.295 "data_size": 63488 00:16:02.295 } 00:16:02.295 ] 00:16:02.295 }' 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.295 23:50:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:02.866 [2024-12-06 23:50:14.140931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.866 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:02.866 "name": "raid_bdev1", 00:16:02.866 "aliases": [ 00:16:02.866 "df0863e6-6d39-40f2-ba6a-c6363a321e68" 00:16:02.866 ], 00:16:02.866 "product_name": "Raid Volume", 00:16:02.866 "block_size": 512, 00:16:02.866 "num_blocks": 190464, 00:16:02.866 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:02.866 "assigned_rate_limits": { 00:16:02.866 "rw_ios_per_sec": 0, 00:16:02.866 "rw_mbytes_per_sec": 0, 00:16:02.866 "r_mbytes_per_sec": 0, 00:16:02.866 "w_mbytes_per_sec": 0 00:16:02.866 }, 00:16:02.866 "claimed": false, 00:16:02.866 "zoned": false, 00:16:02.866 "supported_io_types": { 00:16:02.866 "read": true, 00:16:02.866 "write": true, 00:16:02.866 "unmap": false, 00:16:02.866 "flush": false, 00:16:02.867 "reset": true, 00:16:02.867 "nvme_admin": false, 00:16:02.867 "nvme_io": false, 00:16:02.867 "nvme_io_md": false, 00:16:02.867 "write_zeroes": true, 00:16:02.867 "zcopy": false, 00:16:02.867 "get_zone_info": false, 00:16:02.867 "zone_management": false, 00:16:02.867 "zone_append": false, 00:16:02.867 "compare": false, 00:16:02.867 "compare_and_write": false, 00:16:02.867 "abort": false, 00:16:02.867 "seek_hole": false, 00:16:02.867 "seek_data": false, 00:16:02.867 "copy": false, 00:16:02.867 "nvme_iov_md": false 00:16:02.867 }, 00:16:02.867 "driver_specific": { 00:16:02.867 "raid": { 00:16:02.867 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:02.867 "strip_size_kb": 64, 00:16:02.867 "state": "online", 00:16:02.867 "raid_level": "raid5f", 00:16:02.867 "superblock": true, 00:16:02.867 "num_base_bdevs": 4, 00:16:02.867 "num_base_bdevs_discovered": 4, 00:16:02.867 "num_base_bdevs_operational": 4, 00:16:02.867 "base_bdevs_list": [ 00:16:02.867 { 00:16:02.867 "name": "pt1", 00:16:02.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.867 "is_configured": true, 00:16:02.867 "data_offset": 2048, 
00:16:02.867 "data_size": 63488 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "name": "pt2", 00:16:02.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.867 "is_configured": true, 00:16:02.867 "data_offset": 2048, 00:16:02.867 "data_size": 63488 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "name": "pt3", 00:16:02.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.867 "is_configured": true, 00:16:02.867 "data_offset": 2048, 00:16:02.867 "data_size": 63488 00:16:02.867 }, 00:16:02.867 { 00:16:02.867 "name": "pt4", 00:16:02.867 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:02.867 "is_configured": true, 00:16:02.867 "data_offset": 2048, 00:16:02.867 "data_size": 63488 00:16:02.867 } 00:16:02.867 ] 00:16:02.867 } 00:16:02.867 } 00:16:02.867 }' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:02.867 pt2 00:16:02.867 pt3 00:16:02.867 pt4' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.867 23:50:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.867 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:03.128 [2024-12-06 23:50:14.472306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=df0863e6-6d39-40f2-ba6a-c6363a321e68 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
df0863e6-6d39-40f2-ba6a-c6363a321e68 ']' 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.128 [2024-12-06 23:50:14.516083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.128 [2024-12-06 23:50:14.516108] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.128 [2024-12-06 23:50:14.516228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.128 [2024-12-06 23:50:14.516302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.128 [2024-12-06 23:50:14.516316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.128 
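The teardown sequence in these records (`bdev_raid.sh@441` and the `@448`–`@449` loop) first deletes the raid bdev, then removes each passthru base bdev in turn. A sketch of that ordering in Python, again with `rpc_cmd` stubbed as a recorder (an assumption for illustration):

```python
# Sketch of the teardown: bdev_raid_delete first, then each passthru bdev.
calls = []

def rpc_cmd(*args):
    calls.append(" ".join(args))

base_bdevs_pt = ["pt1", "pt2", "pt3", "pt4"]

rpc_cmd("bdev_raid_delete", "raid_bdev1")      # @441
for bdev in base_bdevs_pt:                     # @448-449
    rpc_cmd("bdev_passthru_delete", bdev)
```

After this, the script confirms no passthru bdevs remain by piping `bdev_get_bdevs` through `jq -r '[.[] | select(.product_name == "passthru")] | any'` and expecting `false`.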
23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.128 23:50:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:03.128 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:03.129 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:03.129 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.129 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:03.129 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.129 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:03.129 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:03.129 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.129 [2024-12-06 23:50:14.683829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:03.129 [2024-12-06 23:50:14.685553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:03.129 [2024-12-06 23:50:14.685601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:03.129 [2024-12-06 23:50:14.685630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:03.129 [2024-12-06 23:50:14.685683] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:03.129 [2024-12-06 23:50:14.685718] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:03.129 [2024-12-06 23:50:14.685735] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:03.129 [2024-12-06 23:50:14.685752] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:03.129 [2024-12-06 23:50:14.685764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.129 [2024-12-06 23:50:14.685773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:03.129 request: 00:16:03.129 { 00:16:03.129 "name": "raid_bdev1", 00:16:03.129 "raid_level": "raid5f", 00:16:03.129 "base_bdevs": [ 00:16:03.129 "malloc1", 00:16:03.129 "malloc2", 00:16:03.129 "malloc3", 00:16:03.129 "malloc4" 00:16:03.389 ], 00:16:03.389 "strip_size_kb": 64, 00:16:03.389 "superblock": false, 00:16:03.390 "method": "bdev_raid_create", 00:16:03.390 "req_id": 1 00:16:03.390 } 00:16:03.390 Got JSON-RPC error response 
00:16:03.390 response: 00:16:03.390 { 00:16:03.390 "code": -17, 00:16:03.390 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:03.390 } 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.390 [2024-12-06 23:50:14.747708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:03.390 [2024-12-06 23:50:14.747764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:03.390 [2024-12-06 23:50:14.747778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:03.390 [2024-12-06 23:50:14.747796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.390 [2024-12-06 23:50:14.749805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.390 [2024-12-06 23:50:14.749841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:03.390 [2024-12-06 23:50:14.749905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:03.390 [2024-12-06 23:50:14.749956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:03.390 pt1 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.390 "name": "raid_bdev1", 00:16:03.390 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:03.390 "strip_size_kb": 64, 00:16:03.390 "state": "configuring", 00:16:03.390 "raid_level": "raid5f", 00:16:03.390 "superblock": true, 00:16:03.390 "num_base_bdevs": 4, 00:16:03.390 "num_base_bdevs_discovered": 1, 00:16:03.390 "num_base_bdevs_operational": 4, 00:16:03.390 "base_bdevs_list": [ 00:16:03.390 { 00:16:03.390 "name": "pt1", 00:16:03.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.390 "is_configured": true, 00:16:03.390 "data_offset": 2048, 00:16:03.390 "data_size": 63488 00:16:03.390 }, 00:16:03.390 { 00:16:03.390 "name": null, 00:16:03.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.390 "is_configured": false, 00:16:03.390 "data_offset": 2048, 00:16:03.390 "data_size": 63488 00:16:03.390 }, 00:16:03.390 { 00:16:03.390 "name": null, 00:16:03.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.390 "is_configured": false, 00:16:03.390 "data_offset": 2048, 00:16:03.390 "data_size": 63488 00:16:03.390 }, 00:16:03.390 { 00:16:03.390 "name": null, 00:16:03.390 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:03.390 "is_configured": false, 00:16:03.390 "data_offset": 2048, 00:16:03.390 "data_size": 63488 00:16:03.390 } 00:16:03.390 ] 00:16:03.390 }' 
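The `jq -r '.[] | select(.name == "raid_bdev1")'` filter above plucks the raid bdev's entry out of the `bdev_raid_get_bdevs all` output, and `verify_raid_bdev_state` then compares its fields against the expected `configuring raid5f 64 4`. The same selection and checks can be sketched in Python against the JSON captured in the log (abridged to the fields the test compares):

```python
import json

# Abridged bdev_raid_get_bdevs output from the log: after only pt1 was
# re-created, the raid bdev sits in "configuring" with 1 of 4 bdevs found.
output = json.loads("""[{
    "name": "raid_bdev1",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
}]""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in output if b["name"] == "raid_bdev1")

# The comparisons verify_raid_bdev_state makes for "configuring raid5f 64 4":
assert info["state"] == "configuring"
assert info["raid_level"] == "raid5f"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4
```

Once all four passthru bdevs are re-created, the same check is repeated with `expected_state=online` and `num_base_bdevs_discovered` equal to 4.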
00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.390 23:50:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.652 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:03.652 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.652 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.652 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.652 [2024-12-06 23:50:15.206903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.652 [2024-12-06 23:50:15.206973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.652 [2024-12-06 23:50:15.206989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:03.652 [2024-12-06 23:50:15.206999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.652 [2024-12-06 23:50:15.207373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.652 [2024-12-06 23:50:15.207391] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.652 [2024-12-06 23:50:15.207453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.652 [2024-12-06 23:50:15.207473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.652 pt2 00:16:03.652 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.652 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:03.652 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:03.652 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.912 [2024-12-06 23:50:15.218888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.912 "name": "raid_bdev1", 00:16:03.912 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:03.912 "strip_size_kb": 64, 00:16:03.912 "state": "configuring", 00:16:03.912 "raid_level": "raid5f", 00:16:03.912 "superblock": true, 00:16:03.912 "num_base_bdevs": 4, 00:16:03.912 "num_base_bdevs_discovered": 1, 00:16:03.912 "num_base_bdevs_operational": 4, 00:16:03.912 "base_bdevs_list": [ 00:16:03.912 { 00:16:03.912 "name": "pt1", 00:16:03.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.912 "is_configured": true, 00:16:03.912 "data_offset": 2048, 00:16:03.912 "data_size": 63488 00:16:03.912 }, 00:16:03.912 { 00:16:03.912 "name": null, 00:16:03.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.912 "is_configured": false, 00:16:03.912 "data_offset": 0, 00:16:03.912 "data_size": 63488 00:16:03.912 }, 00:16:03.912 { 00:16:03.912 "name": null, 00:16:03.912 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.912 "is_configured": false, 00:16:03.912 "data_offset": 2048, 00:16:03.912 "data_size": 63488 00:16:03.912 }, 00:16:03.912 { 00:16:03.912 "name": null, 00:16:03.912 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:03.912 "is_configured": false, 00:16:03.912 "data_offset": 2048, 00:16:03.912 "data_size": 63488 00:16:03.912 } 00:16:03.912 ] 00:16:03.912 }' 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.912 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.171 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:04.171 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.171 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.172 [2024-12-06 23:50:15.686114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.172 [2024-12-06 23:50:15.686160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.172 [2024-12-06 23:50:15.686178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:04.172 [2024-12-06 23:50:15.686187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.172 [2024-12-06 23:50:15.686562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.172 [2024-12-06 23:50:15.686588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.172 [2024-12-06 23:50:15.686651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.172 [2024-12-06 23:50:15.686682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.172 pt2 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.172 [2024-12-06 23:50:15.698094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:04.172 [2024-12-06 23:50:15.698136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.172 [2024-12-06 23:50:15.698156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:04.172 [2024-12-06 23:50:15.698166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.172 [2024-12-06 23:50:15.698491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.172 [2024-12-06 23:50:15.698516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:04.172 [2024-12-06 23:50:15.698568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:04.172 [2024-12-06 23:50:15.698593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:04.172 pt3 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.172 [2024-12-06 23:50:15.710054] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:04.172 [2024-12-06 23:50:15.710092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.172 [2024-12-06 23:50:15.710105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:04.172 [2024-12-06 23:50:15.710112] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.172 [2024-12-06 23:50:15.710426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.172 [2024-12-06 23:50:15.710446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:04.172 [2024-12-06 23:50:15.710496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:04.172 [2024-12-06 23:50:15.710514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:04.172 [2024-12-06 23:50:15.710638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:04.172 [2024-12-06 23:50:15.710650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.172 [2024-12-06 23:50:15.710889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:04.172 [2024-12-06 23:50:15.717200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:04.172 [2024-12-06 23:50:15.717227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:04.172 [2024-12-06 23:50:15.717390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.172 pt4 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.172 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.432 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.432 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.432 "name": "raid_bdev1", 00:16:04.432 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:04.432 "strip_size_kb": 64, 00:16:04.432 "state": "online", 00:16:04.432 "raid_level": "raid5f", 00:16:04.432 "superblock": true, 00:16:04.432 "num_base_bdevs": 4, 00:16:04.432 "num_base_bdevs_discovered": 4, 00:16:04.432 "num_base_bdevs_operational": 4, 00:16:04.432 "base_bdevs_list": [ 00:16:04.432 { 00:16:04.432 "name": "pt1", 00:16:04.432 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.432 "is_configured": true, 00:16:04.432 
"data_offset": 2048, 00:16:04.432 "data_size": 63488 00:16:04.432 }, 00:16:04.432 { 00:16:04.432 "name": "pt2", 00:16:04.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.432 "is_configured": true, 00:16:04.432 "data_offset": 2048, 00:16:04.432 "data_size": 63488 00:16:04.432 }, 00:16:04.432 { 00:16:04.432 "name": "pt3", 00:16:04.432 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.432 "is_configured": true, 00:16:04.432 "data_offset": 2048, 00:16:04.432 "data_size": 63488 00:16:04.432 }, 00:16:04.432 { 00:16:04.432 "name": "pt4", 00:16:04.432 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:04.432 "is_configured": true, 00:16:04.432 "data_offset": 2048, 00:16:04.432 "data_size": 63488 00:16:04.432 } 00:16:04.432 ] 00:16:04.432 }' 00:16:04.432 23:50:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.432 23:50:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.692 23:50:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.692 [2024-12-06 23:50:16.180972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.692 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.692 "name": "raid_bdev1", 00:16:04.692 "aliases": [ 00:16:04.692 "df0863e6-6d39-40f2-ba6a-c6363a321e68" 00:16:04.692 ], 00:16:04.692 "product_name": "Raid Volume", 00:16:04.692 "block_size": 512, 00:16:04.692 "num_blocks": 190464, 00:16:04.692 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:04.692 "assigned_rate_limits": { 00:16:04.693 "rw_ios_per_sec": 0, 00:16:04.693 "rw_mbytes_per_sec": 0, 00:16:04.693 "r_mbytes_per_sec": 0, 00:16:04.693 "w_mbytes_per_sec": 0 00:16:04.693 }, 00:16:04.693 "claimed": false, 00:16:04.693 "zoned": false, 00:16:04.693 "supported_io_types": { 00:16:04.693 "read": true, 00:16:04.693 "write": true, 00:16:04.693 "unmap": false, 00:16:04.693 "flush": false, 00:16:04.693 "reset": true, 00:16:04.693 "nvme_admin": false, 00:16:04.693 "nvme_io": false, 00:16:04.693 "nvme_io_md": false, 00:16:04.693 "write_zeroes": true, 00:16:04.693 "zcopy": false, 00:16:04.693 "get_zone_info": false, 00:16:04.693 "zone_management": false, 00:16:04.693 "zone_append": false, 00:16:04.693 "compare": false, 00:16:04.693 "compare_and_write": false, 00:16:04.693 "abort": false, 00:16:04.693 "seek_hole": false, 00:16:04.693 "seek_data": false, 00:16:04.693 "copy": false, 00:16:04.693 "nvme_iov_md": false 00:16:04.693 }, 00:16:04.693 "driver_specific": { 00:16:04.693 "raid": { 00:16:04.693 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:04.693 "strip_size_kb": 64, 00:16:04.693 "state": "online", 00:16:04.693 "raid_level": "raid5f", 00:16:04.693 "superblock": true, 00:16:04.693 "num_base_bdevs": 4, 00:16:04.693 "num_base_bdevs_discovered": 4, 
00:16:04.693 "num_base_bdevs_operational": 4, 00:16:04.693 "base_bdevs_list": [ 00:16:04.693 { 00:16:04.693 "name": "pt1", 00:16:04.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.693 "is_configured": true, 00:16:04.693 "data_offset": 2048, 00:16:04.693 "data_size": 63488 00:16:04.693 }, 00:16:04.693 { 00:16:04.693 "name": "pt2", 00:16:04.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.693 "is_configured": true, 00:16:04.693 "data_offset": 2048, 00:16:04.693 "data_size": 63488 00:16:04.693 }, 00:16:04.693 { 00:16:04.693 "name": "pt3", 00:16:04.693 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.693 "is_configured": true, 00:16:04.693 "data_offset": 2048, 00:16:04.693 "data_size": 63488 00:16:04.693 }, 00:16:04.693 { 00:16:04.693 "name": "pt4", 00:16:04.693 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:04.693 "is_configured": true, 00:16:04.693 "data_offset": 2048, 00:16:04.693 "data_size": 63488 00:16:04.693 } 00:16:04.693 ] 00:16:04.693 } 00:16:04.693 } 00:16:04.693 }' 00:16:04.693 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.953 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:04.953 pt2 00:16:04.953 pt3 00:16:04.953 pt4' 00:16:04.953 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.953 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.953 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.953 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:04.953 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.954 23:50:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.954 
23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.954 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.954 [2024-12-06 23:50:16.496398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' df0863e6-6d39-40f2-ba6a-c6363a321e68 '!=' df0863e6-6d39-40f2-ba6a-c6363a321e68 ']' 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.214 [2024-12-06 23:50:16.528259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.214 "name": "raid_bdev1", 00:16:05.214 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:05.214 "strip_size_kb": 64, 00:16:05.214 "state": "online", 00:16:05.214 "raid_level": "raid5f", 00:16:05.214 "superblock": true, 00:16:05.214 "num_base_bdevs": 4, 00:16:05.214 "num_base_bdevs_discovered": 3, 00:16:05.214 "num_base_bdevs_operational": 3, 00:16:05.214 "base_bdevs_list": [ 00:16:05.214 { 00:16:05.214 "name": null, 00:16:05.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.214 "is_configured": false, 00:16:05.214 "data_offset": 0, 00:16:05.214 "data_size": 63488 00:16:05.214 }, 00:16:05.214 { 00:16:05.214 "name": "pt2", 00:16:05.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.214 "is_configured": true, 00:16:05.214 "data_offset": 2048, 00:16:05.214 "data_size": 63488 00:16:05.214 }, 00:16:05.214 { 00:16:05.214 "name": "pt3", 00:16:05.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.214 "is_configured": true, 00:16:05.214 "data_offset": 2048, 00:16:05.214 "data_size": 63488 00:16:05.214 }, 00:16:05.214 { 00:16:05.214 "name": "pt4", 00:16:05.214 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.214 "is_configured": true, 00:16:05.214 
"data_offset": 2048, 00:16:05.214 "data_size": 63488 00:16:05.214 } 00:16:05.214 ] 00:16:05.214 }' 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.214 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.474 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.475 [2024-12-06 23:50:16.943534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.475 [2024-12-06 23:50:16.943563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.475 [2024-12-06 23:50:16.943629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.475 [2024-12-06 23:50:16.943700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.475 [2024-12-06 23:50:16.943708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.475 23:50:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.475 [2024-12-06 23:50:17.019410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:05.475 [2024-12-06 23:50:17.019467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.475 [2024-12-06 23:50:17.019482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:05.475 [2024-12-06 23:50:17.019490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.475 [2024-12-06 23:50:17.021565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.475 [2024-12-06 23:50:17.021598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:05.475 [2024-12-06 23:50:17.021673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:05.475 [2024-12-06 23:50:17.021715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.475 pt2 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.475 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.735 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.735 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.735 "name": "raid_bdev1", 00:16:05.735 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:05.735 "strip_size_kb": 64, 00:16:05.735 "state": "configuring", 00:16:05.735 "raid_level": "raid5f", 00:16:05.735 "superblock": true, 00:16:05.735 
"num_base_bdevs": 4, 00:16:05.735 "num_base_bdevs_discovered": 1, 00:16:05.735 "num_base_bdevs_operational": 3, 00:16:05.735 "base_bdevs_list": [ 00:16:05.735 { 00:16:05.735 "name": null, 00:16:05.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.735 "is_configured": false, 00:16:05.735 "data_offset": 2048, 00:16:05.735 "data_size": 63488 00:16:05.735 }, 00:16:05.735 { 00:16:05.735 "name": "pt2", 00:16:05.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.735 "is_configured": true, 00:16:05.735 "data_offset": 2048, 00:16:05.735 "data_size": 63488 00:16:05.735 }, 00:16:05.735 { 00:16:05.735 "name": null, 00:16:05.735 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.735 "is_configured": false, 00:16:05.735 "data_offset": 2048, 00:16:05.735 "data_size": 63488 00:16:05.735 }, 00:16:05.735 { 00:16:05.735 "name": null, 00:16:05.735 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.735 "is_configured": false, 00:16:05.735 "data_offset": 2048, 00:16:05.735 "data_size": 63488 00:16:05.735 } 00:16:05.735 ] 00:16:05.735 }' 00:16:05.735 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.735 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.995 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:05.995 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:05.995 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:05.995 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.995 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.995 [2024-12-06 23:50:17.462767] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:05.995 [2024-12-06 
23:50:17.462833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.995 [2024-12-06 23:50:17.462856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:05.995 [2024-12-06 23:50:17.462865] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.995 [2024-12-06 23:50:17.463232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.995 [2024-12-06 23:50:17.463248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:05.995 [2024-12-06 23:50:17.463321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:05.995 [2024-12-06 23:50:17.463340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:05.995 pt3 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.996 "name": "raid_bdev1", 00:16:05.996 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:05.996 "strip_size_kb": 64, 00:16:05.996 "state": "configuring", 00:16:05.996 "raid_level": "raid5f", 00:16:05.996 "superblock": true, 00:16:05.996 "num_base_bdevs": 4, 00:16:05.996 "num_base_bdevs_discovered": 2, 00:16:05.996 "num_base_bdevs_operational": 3, 00:16:05.996 "base_bdevs_list": [ 00:16:05.996 { 00:16:05.996 "name": null, 00:16:05.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.996 "is_configured": false, 00:16:05.996 "data_offset": 2048, 00:16:05.996 "data_size": 63488 00:16:05.996 }, 00:16:05.996 { 00:16:05.996 "name": "pt2", 00:16:05.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.996 "is_configured": true, 00:16:05.996 "data_offset": 2048, 00:16:05.996 "data_size": 63488 00:16:05.996 }, 00:16:05.996 { 00:16:05.996 "name": "pt3", 00:16:05.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.996 "is_configured": true, 00:16:05.996 "data_offset": 2048, 00:16:05.996 "data_size": 63488 00:16:05.996 }, 00:16:05.996 { 00:16:05.996 "name": null, 00:16:05.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.996 "is_configured": false, 00:16:05.996 "data_offset": 2048, 
00:16:05.996 "data_size": 63488 00:16:05.996 } 00:16:05.996 ] 00:16:05.996 }' 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.996 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.566 [2024-12-06 23:50:17.937925] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:06.566 [2024-12-06 23:50:17.937984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.566 [2024-12-06 23:50:17.938001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:06.566 [2024-12-06 23:50:17.938010] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.566 [2024-12-06 23:50:17.938361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.566 [2024-12-06 23:50:17.938376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:06.566 [2024-12-06 23:50:17.938434] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:06.566 [2024-12-06 23:50:17.938456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:06.566 [2024-12-06 23:50:17.938573] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:06.566 [2024-12-06 23:50:17.938581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:06.566 [2024-12-06 23:50:17.938820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:06.566 [2024-12-06 23:50:17.945866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:06.566 [2024-12-06 23:50:17.945895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:06.566 [2024-12-06 23:50:17.946174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.566 pt4 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.566 
23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.566 23:50:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.566 "name": "raid_bdev1", 00:16:06.566 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:06.566 "strip_size_kb": 64, 00:16:06.566 "state": "online", 00:16:06.566 "raid_level": "raid5f", 00:16:06.566 "superblock": true, 00:16:06.566 "num_base_bdevs": 4, 00:16:06.566 "num_base_bdevs_discovered": 3, 00:16:06.566 "num_base_bdevs_operational": 3, 00:16:06.566 "base_bdevs_list": [ 00:16:06.566 { 00:16:06.567 "name": null, 00:16:06.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.567 "is_configured": false, 00:16:06.567 "data_offset": 2048, 00:16:06.567 "data_size": 63488 00:16:06.567 }, 00:16:06.567 { 00:16:06.567 "name": "pt2", 00:16:06.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.567 "is_configured": true, 00:16:06.567 "data_offset": 2048, 00:16:06.567 "data_size": 63488 00:16:06.567 }, 00:16:06.567 { 00:16:06.567 "name": "pt3", 00:16:06.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.567 "is_configured": true, 00:16:06.567 "data_offset": 2048, 00:16:06.567 "data_size": 63488 00:16:06.567 }, 00:16:06.567 { 00:16:06.567 "name": "pt4", 00:16:06.567 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:06.567 "is_configured": true, 00:16:06.567 "data_offset": 2048, 00:16:06.567 "data_size": 63488 00:16:06.567 } 00:16:06.567 ] 00:16:06.567 }' 00:16:06.567 23:50:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.567 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.827 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.827 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.827 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.827 [2024-12-06 23:50:18.374003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.827 [2024-12-06 23:50:18.374031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.827 [2024-12-06 23:50:18.374090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.827 [2024-12-06 23:50:18.374153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.827 [2024-12-06 23:50:18.374163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:06.827 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.827 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.827 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.827 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.827 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.088 [2024-12-06 23:50:18.449868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.088 [2024-12-06 23:50:18.449918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.088 [2024-12-06 23:50:18.449940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:07.088 [2024-12-06 23:50:18.449952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.088 [2024-12-06 23:50:18.452116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.088 [2024-12-06 23:50:18.452155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.088 [2024-12-06 23:50:18.452227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.088 [2024-12-06 23:50:18.452269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.088 
[2024-12-06 23:50:18.452389] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:07.088 [2024-12-06 23:50:18.452407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.088 [2024-12-06 23:50:18.452421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:07.088 [2024-12-06 23:50:18.452476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.088 [2024-12-06 23:50:18.452574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:07.088 pt1 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.088 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.088 "name": "raid_bdev1", 00:16:07.088 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:07.088 "strip_size_kb": 64, 00:16:07.088 "state": "configuring", 00:16:07.088 "raid_level": "raid5f", 00:16:07.088 "superblock": true, 00:16:07.088 "num_base_bdevs": 4, 00:16:07.088 "num_base_bdevs_discovered": 2, 00:16:07.088 "num_base_bdevs_operational": 3, 00:16:07.088 "base_bdevs_list": [ 00:16:07.088 { 00:16:07.088 "name": null, 00:16:07.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.089 "is_configured": false, 00:16:07.089 "data_offset": 2048, 00:16:07.089 "data_size": 63488 00:16:07.089 }, 00:16:07.089 { 00:16:07.089 "name": "pt2", 00:16:07.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.089 "is_configured": true, 00:16:07.089 "data_offset": 2048, 00:16:07.089 "data_size": 63488 00:16:07.089 }, 00:16:07.089 { 00:16:07.089 "name": "pt3", 00:16:07.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.089 "is_configured": true, 00:16:07.089 "data_offset": 2048, 00:16:07.089 "data_size": 63488 00:16:07.089 }, 00:16:07.089 { 00:16:07.089 "name": null, 00:16:07.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.089 "is_configured": false, 00:16:07.089 "data_offset": 2048, 00:16:07.089 "data_size": 63488 00:16:07.089 } 00:16:07.089 ] 
00:16:07.089 }' 00:16:07.089 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.089 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.349 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:07.349 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:07.349 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.349 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.349 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.610 [2024-12-06 23:50:18.925064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:07.610 [2024-12-06 23:50:18.925105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.610 [2024-12-06 23:50:18.925121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:07.610 [2024-12-06 23:50:18.925130] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.610 [2024-12-06 23:50:18.925468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.610 [2024-12-06 23:50:18.925483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:07.610 [2024-12-06 23:50:18.925539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:07.610 [2024-12-06 23:50:18.925556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:07.610 [2024-12-06 23:50:18.925685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:07.610 [2024-12-06 23:50:18.925694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:07.610 [2024-12-06 23:50:18.925917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:07.610 [2024-12-06 23:50:18.933197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:07.610 [2024-12-06 23:50:18.933234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:07.610 [2024-12-06 23:50:18.933463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.610 pt4 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.610 23:50:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.610 "name": "raid_bdev1", 00:16:07.610 "uuid": "df0863e6-6d39-40f2-ba6a-c6363a321e68", 00:16:07.610 "strip_size_kb": 64, 00:16:07.610 "state": "online", 00:16:07.610 "raid_level": "raid5f", 00:16:07.610 "superblock": true, 00:16:07.610 "num_base_bdevs": 4, 00:16:07.610 "num_base_bdevs_discovered": 3, 00:16:07.610 "num_base_bdevs_operational": 3, 00:16:07.610 "base_bdevs_list": [ 00:16:07.610 { 00:16:07.610 "name": null, 00:16:07.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.610 "is_configured": false, 00:16:07.610 "data_offset": 2048, 00:16:07.610 "data_size": 63488 00:16:07.610 }, 00:16:07.610 { 00:16:07.610 "name": "pt2", 00:16:07.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.610 "is_configured": true, 00:16:07.610 "data_offset": 2048, 00:16:07.610 "data_size": 63488 00:16:07.610 }, 00:16:07.610 { 00:16:07.610 "name": "pt3", 00:16:07.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.610 "is_configured": true, 00:16:07.610 "data_offset": 2048, 00:16:07.610 "data_size": 63488 
00:16:07.610 }, 00:16:07.610 { 00:16:07.610 "name": "pt4", 00:16:07.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.610 "is_configured": true, 00:16:07.610 "data_offset": 2048, 00:16:07.610 "data_size": 63488 00:16:07.610 } 00:16:07.610 ] 00:16:07.610 }' 00:16:07.610 23:50:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.611 23:50:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.871 23:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:07.871 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.871 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.871 23:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:07.871 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.871 23:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.131 [2024-12-06 23:50:19.441148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' df0863e6-6d39-40f2-ba6a-c6363a321e68 '!=' df0863e6-6d39-40f2-ba6a-c6363a321e68 ']' 00:16:08.131 23:50:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84020 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84020 ']' 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84020 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84020 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.131 killing process with pid 84020 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84020' 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84020 00:16:08.131 [2024-12-06 23:50:19.521362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.131 [2024-12-06 23:50:19.521436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.131 [2024-12-06 23:50:19.521505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.131 [2024-12-06 23:50:19.521520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:08.131 23:50:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84020 00:16:08.391 [2024-12-06 23:50:19.889443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.774 ************************************ 00:16:09.774 END TEST raid5f_superblock_test 00:16:09.774 
************************************ 00:16:09.774 23:50:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:09.774 00:16:09.774 real 0m8.408s 00:16:09.774 user 0m13.269s 00:16:09.774 sys 0m1.590s 00:16:09.774 23:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.774 23:50:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.774 23:50:21 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:09.774 23:50:21 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:09.774 23:50:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:09.774 23:50:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.774 23:50:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.774 ************************************ 00:16:09.774 START TEST raid5f_rebuild_test 00:16:09.774 ************************************ 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:09.774 23:50:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84507 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84507 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84507 ']' 00:16:09.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.774 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.774 [2024-12-06 23:50:21.130506] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:16:09.774 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:09.774 Zero copy mechanism will not be used. 
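A note on the zero-copy message above: bdevperf was started with `-o 3M`, i.e. 3 MiB per I/O, which exceeds the 65536-byte zero-copy threshold it reports. A minimal sketch checking that arithmetic (threshold value copied from the log):

```python
# bdevperf's "-o 3M" flag from the command line logged above is 3 MiB per I/O.
io_size = 3 * 1024 * 1024        # bdevperf -o 3M
zero_copy_threshold = 65536      # threshold reported in the notice

print(io_size)                   # 3145728, the size printed in the notice
assert io_size > zero_copy_threshold  # hence "Zero copy mechanism will not be used."
```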
00:16:09.774 [2024-12-06 23:50:21.130691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84507 ] 00:16:09.774 [2024-12-06 23:50:21.304730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.034 [2024-12-06 23:50:21.407090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.034 [2024-12-06 23:50:21.594030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.034 [2024-12-06 23:50:21.594145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.605 BaseBdev1_malloc 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.605 [2024-12-06 23:50:21.983634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:10.605 [2024-12-06 23:50:21.983762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.605 [2024-12-06 23:50:21.983804] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:10.605 [2024-12-06 23:50:21.983849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.605 [2024-12-06 23:50:21.985870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.605 [2024-12-06 23:50:21.985940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.605 BaseBdev1 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.605 23:50:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.605 BaseBdev2_malloc 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.605 [2024-12-06 23:50:22.037858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:10.605 [2024-12-06 23:50:22.037914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.605 [2024-12-06 23:50:22.037936] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:10.605 [2024-12-06 23:50:22.037945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.605 [2024-12-06 23:50:22.039927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.605 [2024-12-06 23:50:22.039966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.605 BaseBdev2 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.605 BaseBdev3_malloc 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.605 [2024-12-06 23:50:22.121532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:10.605 [2024-12-06 23:50:22.121622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.605 [2024-12-06 23:50:22.121646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:10.605 [2024-12-06 23:50:22.121656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.605 
[2024-12-06 23:50:22.123685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.605 [2024-12-06 23:50:22.123723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:10.605 BaseBdev3 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.605 BaseBdev4_malloc 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.605 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.865 [2024-12-06 23:50:22.170222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:10.865 [2024-12-06 23:50:22.170278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.865 [2024-12-06 23:50:22.170298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:10.865 [2024-12-06 23:50:22.170307] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.865 [2024-12-06 23:50:22.172544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.865 [2024-12-06 23:50:22.172585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:10.865 BaseBdev4 00:16:10.865 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.866 spare_malloc 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.866 spare_delay 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.866 [2024-12-06 23:50:22.235489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.866 [2024-12-06 23:50:22.235537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.866 [2024-12-06 23:50:22.235553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:10.866 [2024-12-06 23:50:22.235562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.866 [2024-12-06 23:50:22.237669] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.866 [2024-12-06 23:50:22.237720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.866 spare 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.866 [2024-12-06 23:50:22.247518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.866 [2024-12-06 23:50:22.249243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.866 [2024-12-06 23:50:22.249303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.866 [2024-12-06 23:50:22.249352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:10.866 [2024-12-06 23:50:22.249455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:10.866 [2024-12-06 23:50:22.249465] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:10.866 [2024-12-06 23:50:22.249702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:10.866 [2024-12-06 23:50:22.256100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:10.866 [2024-12-06 23:50:22.256180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:10.866 [2024-12-06 23:50:22.256390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.866 23:50:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.866 "name": "raid_bdev1", 00:16:10.866 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:10.866 "strip_size_kb": 64, 00:16:10.866 "state": "online", 00:16:10.866 
"raid_level": "raid5f", 00:16:10.866 "superblock": false, 00:16:10.866 "num_base_bdevs": 4, 00:16:10.866 "num_base_bdevs_discovered": 4, 00:16:10.866 "num_base_bdevs_operational": 4, 00:16:10.866 "base_bdevs_list": [ 00:16:10.866 { 00:16:10.866 "name": "BaseBdev1", 00:16:10.866 "uuid": "cfc2ada1-dcb1-5d90-b42c-15c032f36639", 00:16:10.866 "is_configured": true, 00:16:10.866 "data_offset": 0, 00:16:10.866 "data_size": 65536 00:16:10.866 }, 00:16:10.866 { 00:16:10.866 "name": "BaseBdev2", 00:16:10.866 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:10.866 "is_configured": true, 00:16:10.866 "data_offset": 0, 00:16:10.866 "data_size": 65536 00:16:10.866 }, 00:16:10.866 { 00:16:10.866 "name": "BaseBdev3", 00:16:10.866 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:10.866 "is_configured": true, 00:16:10.866 "data_offset": 0, 00:16:10.866 "data_size": 65536 00:16:10.866 }, 00:16:10.866 { 00:16:10.866 "name": "BaseBdev4", 00:16:10.866 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:10.866 "is_configured": true, 00:16:10.866 "data_offset": 0, 00:16:10.866 "data_size": 65536 00:16:10.866 } 00:16:10.866 ] 00:16:10.866 }' 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.866 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:11.436 [2024-12-06 23:50:22.728064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
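The `verify_raid_bdev_state` helper above filters `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and checks the state fields. A small sketch of the same check in Python, using a JSON document abridged from the `raid_bdev1` dump in the log (field names and values are copied from that output, not invented):

```python
import json

# Abridged copy of the raid_bdev1 info dumped by bdev_raid_get_bdevs above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "superblock": false,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev2", "is_configured": true, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev3", "is_configured": true, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev4", "is_configured": true, "data_offset": 0, "data_size": 65536}
  ]
}
""")

# The same conditions the test asserts: bdev online, all members discovered
# and operational, every base bdev configured.
assert raid_bdev_info["state"] == "online"
assert (raid_bdev_info["num_base_bdevs_discovered"]
        == raid_bdev_info["num_base_bdevs_operational"]
        == raid_bdev_info["num_base_bdevs"])
configured = [b["name"] for b in raid_bdev_info["base_bdevs_list"] if b["is_configured"]]
print(configured)
```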
00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:11.436 23:50:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:11.436 [2024-12-06 23:50:22.987527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:11.695 /dev/nbd0 00:16:11.695 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.696 1+0 records in 00:16:11.696 1+0 records out 00:16:11.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565347 s, 7.2 MB/s 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:11.696 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:12.265 512+0 records in 00:16:12.265 512+0 records out 00:16:12.265 100663296 bytes (101 MB, 96 MiB) copied, 0.588737 s, 171 MB/s 00:16:12.265 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:12.265 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.265 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:12.265 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.265 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:12.265 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.265 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:12.524 [2024-12-06 23:50:23.883618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
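The numbers in the `dd` full-stripe write above all follow from the raid5f geometry set up earlier in the test: 4 base bdevs of 65536 data blocks each, a 64 KiB strip, 512-byte blocks, and one parity strip per stripe. A sketch of that arithmetic, using only values that appear in the log:

```python
# raid5f geometry check, values taken from the log:
# "num_base_bdevs": 4, "data_size": 65536, "strip_size_kb": 64,
# "blockcnt 196608, blocklen 512", write_unit_size=384.
num_base_bdevs = 4
base_data_blocks = 65536       # per-base-bdev data_size in 512-byte blocks
strip_size_kb = 64
block_size = 512

data_disks = num_base_bdevs - 1               # one strip per stripe holds parity
raid_size_blocks = data_disks * base_data_blocks
full_stripe_bytes = data_disks * strip_size_kb * 1024
write_unit_blocks = full_stripe_bytes // block_size

print(raid_size_blocks)   # 196608, the raid_bdev_size reported by bdev_get_bdevs
print(full_stripe_bytes)  # 196608, the bs= used by the dd full-stripe writes
print(write_unit_blocks)  # 384, the write_unit_size the test computes
```

With `count=512` stripes, `dd` therefore moves 512 * 196608 = 100663296 bytes, matching the "100663296 bytes (101 MB, 96 MiB) copied" line above.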
00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.524 [2024-12-06 23:50:23.916324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.524 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.524 "name": "raid_bdev1", 00:16:12.524 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:12.524 "strip_size_kb": 64, 00:16:12.524 "state": "online", 00:16:12.524 "raid_level": "raid5f", 00:16:12.524 "superblock": false, 00:16:12.524 "num_base_bdevs": 4, 00:16:12.524 "num_base_bdevs_discovered": 3, 00:16:12.524 "num_base_bdevs_operational": 3, 00:16:12.524 "base_bdevs_list": [ 00:16:12.524 { 00:16:12.524 "name": null, 00:16:12.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.524 "is_configured": false, 00:16:12.524 "data_offset": 0, 00:16:12.524 "data_size": 65536 00:16:12.524 }, 00:16:12.524 { 00:16:12.524 "name": "BaseBdev2", 00:16:12.524 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:12.524 "is_configured": true, 00:16:12.524 "data_offset": 0, 00:16:12.524 "data_size": 65536 00:16:12.524 }, 00:16:12.524 { 00:16:12.524 "name": "BaseBdev3", 00:16:12.524 "uuid": 
"c576defa-df10-5438-ae8d-300799c10e03", 00:16:12.524 "is_configured": true, 00:16:12.524 "data_offset": 0, 00:16:12.524 "data_size": 65536 00:16:12.524 }, 00:16:12.524 { 00:16:12.525 "name": "BaseBdev4", 00:16:12.525 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:12.525 "is_configured": true, 00:16:12.525 "data_offset": 0, 00:16:12.525 "data_size": 65536 00:16:12.525 } 00:16:12.525 ] 00:16:12.525 }' 00:16:12.525 23:50:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.525 23:50:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.092 23:50:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.092 23:50:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.092 23:50:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.092 [2024-12-06 23:50:24.355669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.093 [2024-12-06 23:50:24.371297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:13.093 23:50:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.093 23:50:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:13.093 [2024-12-06 23:50:24.380513] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.029 23:50:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.029 "name": "raid_bdev1", 00:16:14.029 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:14.029 "strip_size_kb": 64, 00:16:14.029 "state": "online", 00:16:14.029 "raid_level": "raid5f", 00:16:14.029 "superblock": false, 00:16:14.029 "num_base_bdevs": 4, 00:16:14.029 "num_base_bdevs_discovered": 4, 00:16:14.029 "num_base_bdevs_operational": 4, 00:16:14.029 "process": { 00:16:14.029 "type": "rebuild", 00:16:14.029 "target": "spare", 00:16:14.029 "progress": { 00:16:14.029 "blocks": 19200, 00:16:14.029 "percent": 9 00:16:14.029 } 00:16:14.029 }, 00:16:14.029 "base_bdevs_list": [ 00:16:14.029 { 00:16:14.029 "name": "spare", 00:16:14.029 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:14.029 "is_configured": true, 00:16:14.029 "data_offset": 0, 00:16:14.029 "data_size": 65536 00:16:14.029 }, 00:16:14.029 { 00:16:14.029 "name": "BaseBdev2", 00:16:14.029 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:14.029 "is_configured": true, 00:16:14.029 "data_offset": 0, 00:16:14.029 "data_size": 65536 00:16:14.029 }, 00:16:14.029 { 00:16:14.029 "name": "BaseBdev3", 00:16:14.029 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:14.029 "is_configured": true, 00:16:14.029 "data_offset": 0, 00:16:14.029 "data_size": 65536 00:16:14.029 }, 
00:16:14.029 { 00:16:14.029 "name": "BaseBdev4", 00:16:14.029 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:14.029 "is_configured": true, 00:16:14.029 "data_offset": 0, 00:16:14.029 "data_size": 65536 00:16:14.029 } 00:16:14.029 ] 00:16:14.029 }' 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.029 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.029 [2024-12-06 23:50:25.539255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.029 [2024-12-06 23:50:25.586197] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.029 [2024-12-06 23:50:25.586321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.029 [2024-12-06 23:50:25.586340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.029 [2024-12-06 23:50:25.586350] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.288 "name": "raid_bdev1", 00:16:14.288 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:14.288 "strip_size_kb": 64, 00:16:14.288 "state": "online", 00:16:14.288 "raid_level": "raid5f", 00:16:14.288 "superblock": false, 00:16:14.288 "num_base_bdevs": 4, 00:16:14.288 "num_base_bdevs_discovered": 3, 00:16:14.288 "num_base_bdevs_operational": 3, 00:16:14.288 "base_bdevs_list": [ 00:16:14.288 { 00:16:14.288 "name": null, 00:16:14.288 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:14.288 "is_configured": false, 00:16:14.288 "data_offset": 0, 00:16:14.288 "data_size": 65536 00:16:14.288 }, 00:16:14.288 { 00:16:14.288 "name": "BaseBdev2", 00:16:14.288 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:14.288 "is_configured": true, 00:16:14.288 "data_offset": 0, 00:16:14.288 "data_size": 65536 00:16:14.288 }, 00:16:14.288 { 00:16:14.288 "name": "BaseBdev3", 00:16:14.288 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:14.288 "is_configured": true, 00:16:14.288 "data_offset": 0, 00:16:14.288 "data_size": 65536 00:16:14.288 }, 00:16:14.288 { 00:16:14.288 "name": "BaseBdev4", 00:16:14.288 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:14.288 "is_configured": true, 00:16:14.288 "data_offset": 0, 00:16:14.288 "data_size": 65536 00:16:14.288 } 00:16:14.288 ] 00:16:14.288 }' 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.288 23:50:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.548 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.548 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.548 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.548 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.548 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.548 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.548 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.548 23:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.548 23:50:26 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.808 "name": "raid_bdev1", 00:16:14.808 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:14.808 "strip_size_kb": 64, 00:16:14.808 "state": "online", 00:16:14.808 "raid_level": "raid5f", 00:16:14.808 "superblock": false, 00:16:14.808 "num_base_bdevs": 4, 00:16:14.808 "num_base_bdevs_discovered": 3, 00:16:14.808 "num_base_bdevs_operational": 3, 00:16:14.808 "base_bdevs_list": [ 00:16:14.808 { 00:16:14.808 "name": null, 00:16:14.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.808 "is_configured": false, 00:16:14.808 "data_offset": 0, 00:16:14.808 "data_size": 65536 00:16:14.808 }, 00:16:14.808 { 00:16:14.808 "name": "BaseBdev2", 00:16:14.808 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:14.808 "is_configured": true, 00:16:14.808 "data_offset": 0, 00:16:14.808 "data_size": 65536 00:16:14.808 }, 00:16:14.808 { 00:16:14.808 "name": "BaseBdev3", 00:16:14.808 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:14.808 "is_configured": true, 00:16:14.808 "data_offset": 0, 00:16:14.808 "data_size": 65536 00:16:14.808 }, 00:16:14.808 { 00:16:14.808 "name": "BaseBdev4", 00:16:14.808 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:14.808 "is_configured": true, 00:16:14.808 "data_offset": 0, 00:16:14.808 "data_size": 65536 00:16:14.808 } 00:16:14.808 ] 00:16:14.808 }' 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.808 [2024-12-06 23:50:26.246653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.808 [2024-12-06 23:50:26.261108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.808 23:50:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.808 [2024-12-06 23:50:26.270067] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:15.749 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.749 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.749 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.749 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.749 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.750 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.750 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.750 23:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.750 23:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.750 23:50:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.010 "name": "raid_bdev1", 00:16:16.010 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:16.010 "strip_size_kb": 64, 00:16:16.010 "state": "online", 00:16:16.010 "raid_level": "raid5f", 00:16:16.010 "superblock": false, 00:16:16.010 "num_base_bdevs": 4, 00:16:16.010 "num_base_bdevs_discovered": 4, 00:16:16.010 "num_base_bdevs_operational": 4, 00:16:16.010 "process": { 00:16:16.010 "type": "rebuild", 00:16:16.010 "target": "spare", 00:16:16.010 "progress": { 00:16:16.010 "blocks": 19200, 00:16:16.010 "percent": 9 00:16:16.010 } 00:16:16.010 }, 00:16:16.010 "base_bdevs_list": [ 00:16:16.010 { 00:16:16.010 "name": "spare", 00:16:16.010 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:16.010 "is_configured": true, 00:16:16.010 "data_offset": 0, 00:16:16.010 "data_size": 65536 00:16:16.010 }, 00:16:16.010 { 00:16:16.010 "name": "BaseBdev2", 00:16:16.010 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:16.010 "is_configured": true, 00:16:16.010 "data_offset": 0, 00:16:16.010 "data_size": 65536 00:16:16.010 }, 00:16:16.010 { 00:16:16.010 "name": "BaseBdev3", 00:16:16.010 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:16.010 "is_configured": true, 00:16:16.010 "data_offset": 0, 00:16:16.010 "data_size": 65536 00:16:16.010 }, 00:16:16.010 { 00:16:16.010 "name": "BaseBdev4", 00:16:16.010 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:16.010 "is_configured": true, 00:16:16.010 "data_offset": 0, 00:16:16.010 "data_size": 65536 00:16:16.010 } 00:16:16.010 ] 00:16:16.010 }' 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=616 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.010 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.010 "name": "raid_bdev1", 00:16:16.010 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 
00:16:16.010 "strip_size_kb": 64, 00:16:16.010 "state": "online", 00:16:16.010 "raid_level": "raid5f", 00:16:16.010 "superblock": false, 00:16:16.010 "num_base_bdevs": 4, 00:16:16.010 "num_base_bdevs_discovered": 4, 00:16:16.010 "num_base_bdevs_operational": 4, 00:16:16.010 "process": { 00:16:16.010 "type": "rebuild", 00:16:16.010 "target": "spare", 00:16:16.010 "progress": { 00:16:16.010 "blocks": 21120, 00:16:16.010 "percent": 10 00:16:16.010 } 00:16:16.010 }, 00:16:16.010 "base_bdevs_list": [ 00:16:16.010 { 00:16:16.010 "name": "spare", 00:16:16.010 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:16.010 "is_configured": true, 00:16:16.010 "data_offset": 0, 00:16:16.010 "data_size": 65536 00:16:16.011 }, 00:16:16.011 { 00:16:16.011 "name": "BaseBdev2", 00:16:16.011 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:16.011 "is_configured": true, 00:16:16.011 "data_offset": 0, 00:16:16.011 "data_size": 65536 00:16:16.011 }, 00:16:16.011 { 00:16:16.011 "name": "BaseBdev3", 00:16:16.011 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:16.011 "is_configured": true, 00:16:16.011 "data_offset": 0, 00:16:16.011 "data_size": 65536 00:16:16.011 }, 00:16:16.011 { 00:16:16.011 "name": "BaseBdev4", 00:16:16.011 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:16.011 "is_configured": true, 00:16:16.011 "data_offset": 0, 00:16:16.011 "data_size": 65536 00:16:16.011 } 00:16:16.011 ] 00:16:16.011 }' 00:16:16.011 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.011 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.011 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.011 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.011 23:50:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.951 23:50:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.951 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.951 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.951 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.951 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.951 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.211 "name": "raid_bdev1", 00:16:17.211 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:17.211 "strip_size_kb": 64, 00:16:17.211 "state": "online", 00:16:17.211 "raid_level": "raid5f", 00:16:17.211 "superblock": false, 00:16:17.211 "num_base_bdevs": 4, 00:16:17.211 "num_base_bdevs_discovered": 4, 00:16:17.211 "num_base_bdevs_operational": 4, 00:16:17.211 "process": { 00:16:17.211 "type": "rebuild", 00:16:17.211 "target": "spare", 00:16:17.211 "progress": { 00:16:17.211 "blocks": 42240, 00:16:17.211 "percent": 21 00:16:17.211 } 00:16:17.211 }, 00:16:17.211 "base_bdevs_list": [ 00:16:17.211 { 00:16:17.211 "name": "spare", 00:16:17.211 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 
00:16:17.211 "is_configured": true, 00:16:17.211 "data_offset": 0, 00:16:17.211 "data_size": 65536 00:16:17.211 }, 00:16:17.211 { 00:16:17.211 "name": "BaseBdev2", 00:16:17.211 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:17.211 "is_configured": true, 00:16:17.211 "data_offset": 0, 00:16:17.211 "data_size": 65536 00:16:17.211 }, 00:16:17.211 { 00:16:17.211 "name": "BaseBdev3", 00:16:17.211 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:17.211 "is_configured": true, 00:16:17.211 "data_offset": 0, 00:16:17.211 "data_size": 65536 00:16:17.211 }, 00:16:17.211 { 00:16:17.211 "name": "BaseBdev4", 00:16:17.211 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:17.211 "is_configured": true, 00:16:17.211 "data_offset": 0, 00:16:17.211 "data_size": 65536 00:16:17.211 } 00:16:17.211 ] 00:16:17.211 }' 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.211 23:50:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.148 23:50:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.407 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.407 "name": "raid_bdev1", 00:16:18.407 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:18.407 "strip_size_kb": 64, 00:16:18.407 "state": "online", 00:16:18.407 "raid_level": "raid5f", 00:16:18.407 "superblock": false, 00:16:18.407 "num_base_bdevs": 4, 00:16:18.407 "num_base_bdevs_discovered": 4, 00:16:18.407 "num_base_bdevs_operational": 4, 00:16:18.407 "process": { 00:16:18.407 "type": "rebuild", 00:16:18.407 "target": "spare", 00:16:18.407 "progress": { 00:16:18.407 "blocks": 63360, 00:16:18.407 "percent": 32 00:16:18.407 } 00:16:18.407 }, 00:16:18.407 "base_bdevs_list": [ 00:16:18.407 { 00:16:18.407 "name": "spare", 00:16:18.407 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:18.407 "is_configured": true, 00:16:18.407 "data_offset": 0, 00:16:18.407 "data_size": 65536 00:16:18.407 }, 00:16:18.407 { 00:16:18.407 "name": "BaseBdev2", 00:16:18.407 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:18.407 "is_configured": true, 00:16:18.407 "data_offset": 0, 00:16:18.407 "data_size": 65536 00:16:18.407 }, 00:16:18.407 { 00:16:18.407 "name": "BaseBdev3", 00:16:18.407 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:18.407 "is_configured": true, 00:16:18.407 "data_offset": 0, 00:16:18.407 "data_size": 65536 00:16:18.407 }, 00:16:18.407 { 00:16:18.407 "name": 
"BaseBdev4", 00:16:18.407 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:18.408 "is_configured": true, 00:16:18.408 "data_offset": 0, 00:16:18.408 "data_size": 65536 00:16:18.408 } 00:16:18.408 ] 00:16:18.408 }' 00:16:18.408 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.408 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.408 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.408 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.408 23:50:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.349 23:50:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.349 "name": "raid_bdev1", 00:16:19.349 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:19.349 "strip_size_kb": 64, 00:16:19.349 "state": "online", 00:16:19.349 "raid_level": "raid5f", 00:16:19.349 "superblock": false, 00:16:19.349 "num_base_bdevs": 4, 00:16:19.349 "num_base_bdevs_discovered": 4, 00:16:19.349 "num_base_bdevs_operational": 4, 00:16:19.349 "process": { 00:16:19.349 "type": "rebuild", 00:16:19.349 "target": "spare", 00:16:19.349 "progress": { 00:16:19.349 "blocks": 86400, 00:16:19.349 "percent": 43 00:16:19.349 } 00:16:19.349 }, 00:16:19.349 "base_bdevs_list": [ 00:16:19.349 { 00:16:19.349 "name": "spare", 00:16:19.349 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:19.349 "is_configured": true, 00:16:19.349 "data_offset": 0, 00:16:19.349 "data_size": 65536 00:16:19.349 }, 00:16:19.349 { 00:16:19.349 "name": "BaseBdev2", 00:16:19.349 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:19.349 "is_configured": true, 00:16:19.349 "data_offset": 0, 00:16:19.349 "data_size": 65536 00:16:19.349 }, 00:16:19.349 { 00:16:19.349 "name": "BaseBdev3", 00:16:19.349 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:19.349 "is_configured": true, 00:16:19.349 "data_offset": 0, 00:16:19.349 "data_size": 65536 00:16:19.349 }, 00:16:19.349 { 00:16:19.349 "name": "BaseBdev4", 00:16:19.349 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:19.349 "is_configured": true, 00:16:19.349 "data_offset": 0, 00:16:19.349 "data_size": 65536 00:16:19.349 } 00:16:19.349 ] 00:16:19.349 }' 00:16:19.349 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.609 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.609 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.609 23:50:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.609 23:50:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.550 23:50:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.550 23:50:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.550 23:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.550 "name": "raid_bdev1", 00:16:20.550 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:20.550 "strip_size_kb": 64, 00:16:20.550 "state": "online", 00:16:20.550 "raid_level": "raid5f", 00:16:20.550 "superblock": false, 00:16:20.550 "num_base_bdevs": 4, 00:16:20.550 "num_base_bdevs_discovered": 4, 00:16:20.550 "num_base_bdevs_operational": 4, 00:16:20.550 "process": { 00:16:20.550 "type": "rebuild", 00:16:20.550 "target": "spare", 00:16:20.550 "progress": { 00:16:20.550 "blocks": 107520, 00:16:20.550 "percent": 54 00:16:20.550 } 
00:16:20.550 }, 00:16:20.550 "base_bdevs_list": [ 00:16:20.550 { 00:16:20.550 "name": "spare", 00:16:20.550 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:20.550 "is_configured": true, 00:16:20.550 "data_offset": 0, 00:16:20.550 "data_size": 65536 00:16:20.550 }, 00:16:20.550 { 00:16:20.550 "name": "BaseBdev2", 00:16:20.550 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:20.550 "is_configured": true, 00:16:20.550 "data_offset": 0, 00:16:20.550 "data_size": 65536 00:16:20.550 }, 00:16:20.550 { 00:16:20.550 "name": "BaseBdev3", 00:16:20.550 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:20.550 "is_configured": true, 00:16:20.550 "data_offset": 0, 00:16:20.550 "data_size": 65536 00:16:20.550 }, 00:16:20.550 { 00:16:20.550 "name": "BaseBdev4", 00:16:20.550 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:20.550 "is_configured": true, 00:16:20.550 "data_offset": 0, 00:16:20.550 "data_size": 65536 00:16:20.550 } 00:16:20.550 ] 00:16:20.550 }' 00:16:20.550 23:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.550 23:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.550 23:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.550 23:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.550 23:50:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.934 
23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.934 "name": "raid_bdev1", 00:16:21.934 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:21.934 "strip_size_kb": 64, 00:16:21.934 "state": "online", 00:16:21.934 "raid_level": "raid5f", 00:16:21.934 "superblock": false, 00:16:21.934 "num_base_bdevs": 4, 00:16:21.934 "num_base_bdevs_discovered": 4, 00:16:21.934 "num_base_bdevs_operational": 4, 00:16:21.934 "process": { 00:16:21.934 "type": "rebuild", 00:16:21.934 "target": "spare", 00:16:21.934 "progress": { 00:16:21.934 "blocks": 130560, 00:16:21.934 "percent": 66 00:16:21.934 } 00:16:21.934 }, 00:16:21.934 "base_bdevs_list": [ 00:16:21.934 { 00:16:21.934 "name": "spare", 00:16:21.934 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:21.934 "is_configured": true, 00:16:21.934 "data_offset": 0, 00:16:21.934 "data_size": 65536 00:16:21.934 }, 00:16:21.934 { 00:16:21.934 "name": "BaseBdev2", 00:16:21.934 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:21.934 "is_configured": true, 00:16:21.934 "data_offset": 0, 00:16:21.934 "data_size": 65536 00:16:21.934 }, 00:16:21.934 { 00:16:21.934 "name": "BaseBdev3", 00:16:21.934 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 
00:16:21.934 "is_configured": true, 00:16:21.934 "data_offset": 0, 00:16:21.934 "data_size": 65536 00:16:21.934 }, 00:16:21.934 { 00:16:21.934 "name": "BaseBdev4", 00:16:21.934 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:21.934 "is_configured": true, 00:16:21.934 "data_offset": 0, 00:16:21.934 "data_size": 65536 00:16:21.934 } 00:16:21.934 ] 00:16:21.934 }' 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.934 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.935 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.935 23:50:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.878 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.878 "name": "raid_bdev1", 00:16:22.878 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:22.878 "strip_size_kb": 64, 00:16:22.878 "state": "online", 00:16:22.878 "raid_level": "raid5f", 00:16:22.878 "superblock": false, 00:16:22.878 "num_base_bdevs": 4, 00:16:22.878 "num_base_bdevs_discovered": 4, 00:16:22.878 "num_base_bdevs_operational": 4, 00:16:22.878 "process": { 00:16:22.878 "type": "rebuild", 00:16:22.878 "target": "spare", 00:16:22.878 "progress": { 00:16:22.878 "blocks": 151680, 00:16:22.878 "percent": 77 00:16:22.878 } 00:16:22.878 }, 00:16:22.878 "base_bdevs_list": [ 00:16:22.878 { 00:16:22.878 "name": "spare", 00:16:22.878 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:22.878 "is_configured": true, 00:16:22.878 "data_offset": 0, 00:16:22.878 "data_size": 65536 00:16:22.878 }, 00:16:22.878 { 00:16:22.878 "name": "BaseBdev2", 00:16:22.878 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:22.878 "is_configured": true, 00:16:22.878 "data_offset": 0, 00:16:22.878 "data_size": 65536 00:16:22.878 }, 00:16:22.878 { 00:16:22.878 "name": "BaseBdev3", 00:16:22.878 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:22.878 "is_configured": true, 00:16:22.878 "data_offset": 0, 00:16:22.878 "data_size": 65536 00:16:22.878 }, 00:16:22.879 { 00:16:22.879 "name": "BaseBdev4", 00:16:22.879 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:22.879 "is_configured": true, 00:16:22.879 "data_offset": 0, 00:16:22.879 "data_size": 65536 00:16:22.879 } 00:16:22.879 ] 00:16:22.879 }' 00:16:22.879 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.879 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:22.879 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.879 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.879 23:50:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.858 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.858 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.126 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.126 "name": "raid_bdev1", 00:16:24.126 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:24.126 "strip_size_kb": 64, 00:16:24.126 "state": "online", 00:16:24.126 "raid_level": "raid5f", 00:16:24.126 "superblock": false, 00:16:24.126 "num_base_bdevs": 4, 00:16:24.126 "num_base_bdevs_discovered": 4, 00:16:24.126 "num_base_bdevs_operational": 4, 00:16:24.126 
"process": { 00:16:24.126 "type": "rebuild", 00:16:24.126 "target": "spare", 00:16:24.126 "progress": { 00:16:24.126 "blocks": 174720, 00:16:24.126 "percent": 88 00:16:24.126 } 00:16:24.126 }, 00:16:24.126 "base_bdevs_list": [ 00:16:24.126 { 00:16:24.126 "name": "spare", 00:16:24.126 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:24.126 "is_configured": true, 00:16:24.126 "data_offset": 0, 00:16:24.126 "data_size": 65536 00:16:24.126 }, 00:16:24.126 { 00:16:24.126 "name": "BaseBdev2", 00:16:24.126 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:24.126 "is_configured": true, 00:16:24.126 "data_offset": 0, 00:16:24.126 "data_size": 65536 00:16:24.126 }, 00:16:24.126 { 00:16:24.126 "name": "BaseBdev3", 00:16:24.126 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:24.126 "is_configured": true, 00:16:24.126 "data_offset": 0, 00:16:24.126 "data_size": 65536 00:16:24.126 }, 00:16:24.126 { 00:16:24.126 "name": "BaseBdev4", 00:16:24.126 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:24.126 "is_configured": true, 00:16:24.126 "data_offset": 0, 00:16:24.126 "data_size": 65536 00:16:24.126 } 00:16:24.126 ] 00:16:24.126 }' 00:16:24.127 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.127 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.127 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.127 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.127 23:50:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.065 "name": "raid_bdev1", 00:16:25.065 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:25.065 "strip_size_kb": 64, 00:16:25.065 "state": "online", 00:16:25.065 "raid_level": "raid5f", 00:16:25.065 "superblock": false, 00:16:25.065 "num_base_bdevs": 4, 00:16:25.065 "num_base_bdevs_discovered": 4, 00:16:25.065 "num_base_bdevs_operational": 4, 00:16:25.065 "process": { 00:16:25.065 "type": "rebuild", 00:16:25.065 "target": "spare", 00:16:25.065 "progress": { 00:16:25.065 "blocks": 195840, 00:16:25.065 "percent": 99 00:16:25.065 } 00:16:25.065 }, 00:16:25.065 "base_bdevs_list": [ 00:16:25.065 { 00:16:25.065 "name": "spare", 00:16:25.065 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:25.065 "is_configured": true, 00:16:25.065 "data_offset": 0, 00:16:25.065 "data_size": 65536 00:16:25.065 }, 00:16:25.065 { 00:16:25.065 "name": "BaseBdev2", 00:16:25.065 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:25.065 "is_configured": true, 00:16:25.065 
"data_offset": 0, 00:16:25.065 "data_size": 65536 00:16:25.065 }, 00:16:25.065 { 00:16:25.065 "name": "BaseBdev3", 00:16:25.065 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:25.065 "is_configured": true, 00:16:25.065 "data_offset": 0, 00:16:25.065 "data_size": 65536 00:16:25.065 }, 00:16:25.065 { 00:16:25.065 "name": "BaseBdev4", 00:16:25.065 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:25.065 "is_configured": true, 00:16:25.065 "data_offset": 0, 00:16:25.065 "data_size": 65536 00:16:25.065 } 00:16:25.065 ] 00:16:25.065 }' 00:16:25.065 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.065 [2024-12-06 23:50:36.614230] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:25.065 [2024-12-06 23:50:36.614345] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:25.065 [2024-12-06 23:50:36.614409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.326 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.326 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.326 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.326 23:50:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.267 "name": "raid_bdev1", 00:16:26.267 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:26.267 "strip_size_kb": 64, 00:16:26.267 "state": "online", 00:16:26.267 "raid_level": "raid5f", 00:16:26.267 "superblock": false, 00:16:26.267 "num_base_bdevs": 4, 00:16:26.267 "num_base_bdevs_discovered": 4, 00:16:26.267 "num_base_bdevs_operational": 4, 00:16:26.267 "base_bdevs_list": [ 00:16:26.267 { 00:16:26.267 "name": "spare", 00:16:26.267 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:26.267 "is_configured": true, 00:16:26.267 "data_offset": 0, 00:16:26.267 "data_size": 65536 00:16:26.267 }, 00:16:26.267 { 00:16:26.267 "name": "BaseBdev2", 00:16:26.267 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:26.267 "is_configured": true, 00:16:26.267 "data_offset": 0, 00:16:26.267 "data_size": 65536 00:16:26.267 }, 00:16:26.267 { 00:16:26.267 "name": "BaseBdev3", 00:16:26.267 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:26.267 "is_configured": true, 00:16:26.267 "data_offset": 0, 00:16:26.267 "data_size": 65536 00:16:26.267 }, 00:16:26.267 { 00:16:26.267 "name": "BaseBdev4", 00:16:26.267 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:26.267 "is_configured": 
true, 00:16:26.267 "data_offset": 0, 00:16:26.267 "data_size": 65536 00:16:26.267 } 00:16:26.267 ] 00:16:26.267 }' 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:26.267 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.528 "name": "raid_bdev1", 00:16:26.528 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:26.528 "strip_size_kb": 64, 00:16:26.528 "state": 
"online", 00:16:26.528 "raid_level": "raid5f", 00:16:26.528 "superblock": false, 00:16:26.528 "num_base_bdevs": 4, 00:16:26.528 "num_base_bdevs_discovered": 4, 00:16:26.528 "num_base_bdevs_operational": 4, 00:16:26.528 "base_bdevs_list": [ 00:16:26.528 { 00:16:26.528 "name": "spare", 00:16:26.528 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:26.528 "is_configured": true, 00:16:26.528 "data_offset": 0, 00:16:26.528 "data_size": 65536 00:16:26.528 }, 00:16:26.528 { 00:16:26.528 "name": "BaseBdev2", 00:16:26.528 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:26.528 "is_configured": true, 00:16:26.528 "data_offset": 0, 00:16:26.528 "data_size": 65536 00:16:26.528 }, 00:16:26.528 { 00:16:26.528 "name": "BaseBdev3", 00:16:26.528 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:26.528 "is_configured": true, 00:16:26.528 "data_offset": 0, 00:16:26.528 "data_size": 65536 00:16:26.528 }, 00:16:26.528 { 00:16:26.528 "name": "BaseBdev4", 00:16:26.528 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:26.528 "is_configured": true, 00:16:26.528 "data_offset": 0, 00:16:26.528 "data_size": 65536 00:16:26.528 } 00:16:26.528 ] 00:16:26.528 }' 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.528 23:50:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.528 23:50:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.528 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.528 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.528 "name": "raid_bdev1", 00:16:26.528 "uuid": "abf588b5-6321-451f-91c0-b09dfe09f8a1", 00:16:26.528 "strip_size_kb": 64, 00:16:26.528 "state": "online", 00:16:26.528 "raid_level": "raid5f", 00:16:26.528 "superblock": false, 00:16:26.528 "num_base_bdevs": 4, 00:16:26.528 "num_base_bdevs_discovered": 4, 00:16:26.528 "num_base_bdevs_operational": 4, 00:16:26.528 "base_bdevs_list": [ 00:16:26.528 { 00:16:26.528 "name": "spare", 00:16:26.528 "uuid": "9846a231-aef6-532b-acf7-7329b1e81837", 00:16:26.528 "is_configured": true, 00:16:26.528 "data_offset": 0, 00:16:26.528 "data_size": 65536 00:16:26.528 }, 00:16:26.528 { 00:16:26.528 
"name": "BaseBdev2", 00:16:26.528 "uuid": "e1c305dd-fa40-5f84-9cbc-7a43e7e1fd9d", 00:16:26.528 "is_configured": true, 00:16:26.528 "data_offset": 0, 00:16:26.528 "data_size": 65536 00:16:26.528 }, 00:16:26.528 { 00:16:26.528 "name": "BaseBdev3", 00:16:26.528 "uuid": "c576defa-df10-5438-ae8d-300799c10e03", 00:16:26.528 "is_configured": true, 00:16:26.528 "data_offset": 0, 00:16:26.528 "data_size": 65536 00:16:26.528 }, 00:16:26.528 { 00:16:26.528 "name": "BaseBdev4", 00:16:26.528 "uuid": "0b163daf-bbf3-5f56-8e60-15e86b858cb3", 00:16:26.528 "is_configured": true, 00:16:26.528 "data_offset": 0, 00:16:26.528 "data_size": 65536 00:16:26.528 } 00:16:26.528 ] 00:16:26.528 }' 00:16:26.528 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.528 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.099 [2024-12-06 23:50:38.469020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.099 [2024-12-06 23:50:38.469106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.099 [2024-12-06 23:50:38.469237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.099 [2024-12-06 23:50:38.469352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.099 [2024-12-06 23:50:38.469405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.099 23:50:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:27.099 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:27.359 /dev/nbd0 00:16:27.359 23:50:38 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.359 1+0 records in 00:16:27.359 1+0 records out 00:16:27.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507393 s, 8.1 MB/s 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:27.359 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:27.620 /dev/nbd1 00:16:27.620 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:27.620 23:50:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:27.620 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:27.620 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:27.620 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:27.620 23:50:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.620 1+0 records in 00:16:27.620 1+0 records out 00:16:27.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473858 s, 8.6 MB/s 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:27.620 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.880 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84507 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84507 ']' 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84507 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84507 00:16:28.141 killing process with pid 84507 00:16:28.141 Received shutdown signal, test time was about 60.000000 seconds 00:16:28.141 00:16:28.141 Latency(us) 00:16:28.141 [2024-12-06T23:50:39.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.141 [2024-12-06T23:50:39.704Z] =================================================================================================================== 00:16:28.141 [2024-12-06T23:50:39.704Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84507' 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84507 00:16:28.141 [2024-12-06 23:50:39.682416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.141 23:50:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84507 00:16:28.711 [2024-12-06 23:50:40.141254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.652 23:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:29.652 00:16:29.652 real 0m20.147s 00:16:29.652 user 0m24.075s 00:16:29.652 sys 0m2.396s 00:16:29.652 ************************************ 00:16:29.652 END TEST raid5f_rebuild_test 00:16:29.652 ************************************ 00:16:29.652 23:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.652 23:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.913 23:50:41 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:16:29.913 23:50:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:29.913 23:50:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.913 23:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.913 ************************************ 00:16:29.913 START TEST raid5f_rebuild_test_sb 00:16:29.913 ************************************ 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:29.913 23:50:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85029 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85029 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85029 ']' 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.913 23:50:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.913 [2024-12-06 23:50:41.371163] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:16:29.913 [2024-12-06 23:50:41.371340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:29.913 Zero copy mechanism will not be used. 
00:16:29.913 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85029 ] 00:16:30.173 [2024-12-06 23:50:41.551179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.173 [2024-12-06 23:50:41.654746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.433 [2024-12-06 23:50:41.844576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.433 [2024-12-06 23:50:41.844728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.694 BaseBdev1_malloc 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.694 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.694 [2024-12-06 23:50:42.253240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:30.694 [2024-12-06 23:50:42.253310] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:30.694 [2024-12-06 23:50:42.253349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.694 [2024-12-06 23:50:42.253360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.694 [2024-12-06 23:50:42.255369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.694 [2024-12-06 23:50:42.255411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.955 BaseBdev1 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.955 BaseBdev2_malloc 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.955 [2024-12-06 23:50:42.302987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:30.955 [2024-12-06 23:50:42.303078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.955 [2024-12-06 23:50:42.303099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.955 
[2024-12-06 23:50:42.303110] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.955 [2024-12-06 23:50:42.305089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.955 [2024-12-06 23:50:42.305183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:30.955 BaseBdev2 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.955 BaseBdev3_malloc 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.955 [2024-12-06 23:50:42.388427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:30.955 [2024-12-06 23:50:42.388501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.955 [2024-12-06 23:50:42.388524] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:30.955 [2024-12-06 23:50:42.388535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.955 [2024-12-06 23:50:42.390594] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.955 [2024-12-06 23:50:42.390637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:30.955 BaseBdev3 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.955 BaseBdev4_malloc 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.955 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.955 [2024-12-06 23:50:42.441302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:30.955 [2024-12-06 23:50:42.441378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.955 [2024-12-06 23:50:42.441398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:30.955 [2024-12-06 23:50:42.441408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.955 [2024-12-06 23:50:42.443342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.956 [2024-12-06 23:50:42.443448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:16:30.956 BaseBdev4 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.956 spare_malloc 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.956 spare_delay 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.956 [2024-12-06 23:50:42.506649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:30.956 [2024-12-06 23:50:42.506731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.956 [2024-12-06 23:50:42.506747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:30.956 [2024-12-06 23:50:42.506757] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.956 [2024-12-06 23:50:42.508737] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.956 [2024-12-06 23:50:42.508828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:30.956 spare 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.956 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.217 [2024-12-06 23:50:42.518710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.217 [2024-12-06 23:50:42.520438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.217 [2024-12-06 23:50:42.520553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.217 [2024-12-06 23:50:42.520610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:31.217 [2024-12-06 23:50:42.520819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:31.217 [2024-12-06 23:50:42.520834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:31.217 [2024-12-06 23:50:42.521066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:31.217 [2024-12-06 23:50:42.527965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:31.217 [2024-12-06 23:50:42.527987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:31.217 [2024-12-06 23:50:42.528192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.217 "name": "raid_bdev1", 00:16:31.217 "uuid": 
"e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:31.217 "strip_size_kb": 64, 00:16:31.217 "state": "online", 00:16:31.217 "raid_level": "raid5f", 00:16:31.217 "superblock": true, 00:16:31.217 "num_base_bdevs": 4, 00:16:31.217 "num_base_bdevs_discovered": 4, 00:16:31.217 "num_base_bdevs_operational": 4, 00:16:31.217 "base_bdevs_list": [ 00:16:31.217 { 00:16:31.217 "name": "BaseBdev1", 00:16:31.217 "uuid": "b8fcaef6-337b-563c-b6d5-527abe87a3ec", 00:16:31.217 "is_configured": true, 00:16:31.217 "data_offset": 2048, 00:16:31.217 "data_size": 63488 00:16:31.217 }, 00:16:31.217 { 00:16:31.217 "name": "BaseBdev2", 00:16:31.217 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:31.217 "is_configured": true, 00:16:31.217 "data_offset": 2048, 00:16:31.217 "data_size": 63488 00:16:31.217 }, 00:16:31.217 { 00:16:31.217 "name": "BaseBdev3", 00:16:31.217 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:31.217 "is_configured": true, 00:16:31.217 "data_offset": 2048, 00:16:31.217 "data_size": 63488 00:16:31.217 }, 00:16:31.217 { 00:16:31.217 "name": "BaseBdev4", 00:16:31.217 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:31.217 "is_configured": true, 00:16:31.217 "data_offset": 2048, 00:16:31.217 "data_size": 63488 00:16:31.217 } 00:16:31.217 ] 00:16:31.217 }' 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.217 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.479 [2024-12-06 23:50:42.955845] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.479 23:50:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.479 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:31.740 [2024-12-06 23:50:43.219282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:31.740 /dev/nbd0 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.740 1+0 records in 00:16:31.740 1+0 records out 00:16:31.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035153 s, 11.7 MB/s 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:31.740 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:32.310 496+0 records in 00:16:32.310 496+0 records out 00:16:32.310 97517568 bytes (98 MB, 93 MiB) copied, 0.435672 s, 224 MB/s 00:16:32.310 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:32.310 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.310 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:32.310 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.310 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:32.310 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:32.310 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:32.570 [2024-12-06 23:50:43.901180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.570 [2024-12-06 23:50:43.946078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:32.570 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.571 23:50:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.571 "name": "raid_bdev1", 00:16:32.571 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:32.571 "strip_size_kb": 64, 00:16:32.571 "state": "online", 00:16:32.571 "raid_level": "raid5f", 00:16:32.571 "superblock": true, 00:16:32.571 "num_base_bdevs": 4, 00:16:32.571 "num_base_bdevs_discovered": 3, 00:16:32.571 "num_base_bdevs_operational": 3, 00:16:32.571 "base_bdevs_list": [ 00:16:32.571 { 00:16:32.571 "name": null, 00:16:32.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.571 "is_configured": 
false, 00:16:32.571 "data_offset": 0, 00:16:32.571 "data_size": 63488 00:16:32.571 }, 00:16:32.571 { 00:16:32.571 "name": "BaseBdev2", 00:16:32.571 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:32.571 "is_configured": true, 00:16:32.571 "data_offset": 2048, 00:16:32.571 "data_size": 63488 00:16:32.571 }, 00:16:32.571 { 00:16:32.571 "name": "BaseBdev3", 00:16:32.571 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:32.571 "is_configured": true, 00:16:32.571 "data_offset": 2048, 00:16:32.571 "data_size": 63488 00:16:32.571 }, 00:16:32.571 { 00:16:32.571 "name": "BaseBdev4", 00:16:32.571 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:32.571 "is_configured": true, 00:16:32.571 "data_offset": 2048, 00:16:32.571 "data_size": 63488 00:16:32.571 } 00:16:32.571 ] 00:16:32.571 }' 00:16:32.571 23:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.571 23:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.831 23:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.831 23:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.831 23:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.831 [2024-12-06 23:50:44.365365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.831 [2024-12-06 23:50:44.381333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:32.831 23:50:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.831 23:50:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:32.831 [2024-12-06 23:50:44.390910] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.211 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.211 "name": "raid_bdev1", 00:16:34.211 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:34.211 "strip_size_kb": 64, 00:16:34.211 "state": "online", 00:16:34.211 "raid_level": "raid5f", 00:16:34.211 "superblock": true, 00:16:34.211 "num_base_bdevs": 4, 00:16:34.211 "num_base_bdevs_discovered": 4, 00:16:34.211 "num_base_bdevs_operational": 4, 00:16:34.211 "process": { 00:16:34.211 "type": "rebuild", 00:16:34.211 "target": "spare", 00:16:34.212 "progress": { 00:16:34.212 "blocks": 19200, 00:16:34.212 "percent": 10 00:16:34.212 } 00:16:34.212 }, 00:16:34.212 "base_bdevs_list": [ 00:16:34.212 { 00:16:34.212 "name": "spare", 00:16:34.212 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:34.212 "is_configured": true, 00:16:34.212 "data_offset": 2048, 00:16:34.212 "data_size": 63488 00:16:34.212 }, 
00:16:34.212 { 00:16:34.212 "name": "BaseBdev2", 00:16:34.212 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:34.212 "is_configured": true, 00:16:34.212 "data_offset": 2048, 00:16:34.212 "data_size": 63488 00:16:34.212 }, 00:16:34.212 { 00:16:34.212 "name": "BaseBdev3", 00:16:34.212 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:34.212 "is_configured": true, 00:16:34.212 "data_offset": 2048, 00:16:34.212 "data_size": 63488 00:16:34.212 }, 00:16:34.212 { 00:16:34.212 "name": "BaseBdev4", 00:16:34.212 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:34.212 "is_configured": true, 00:16:34.212 "data_offset": 2048, 00:16:34.212 "data_size": 63488 00:16:34.212 } 00:16:34.212 ] 00:16:34.212 }' 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.212 [2024-12-06 23:50:45.541749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.212 [2024-12-06 23:50:45.596790] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:34.212 [2024-12-06 23:50:45.596852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.212 [2024-12-06 23:50:45.596869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.212 
[2024-12-06 23:50:45.596878] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.212 "name": "raid_bdev1", 00:16:34.212 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:34.212 "strip_size_kb": 64, 00:16:34.212 "state": "online", 00:16:34.212 "raid_level": "raid5f", 00:16:34.212 "superblock": true, 00:16:34.212 "num_base_bdevs": 4, 00:16:34.212 "num_base_bdevs_discovered": 3, 00:16:34.212 "num_base_bdevs_operational": 3, 00:16:34.212 "base_bdevs_list": [ 00:16:34.212 { 00:16:34.212 "name": null, 00:16:34.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.212 "is_configured": false, 00:16:34.212 "data_offset": 0, 00:16:34.212 "data_size": 63488 00:16:34.212 }, 00:16:34.212 { 00:16:34.212 "name": "BaseBdev2", 00:16:34.212 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:34.212 "is_configured": true, 00:16:34.212 "data_offset": 2048, 00:16:34.212 "data_size": 63488 00:16:34.212 }, 00:16:34.212 { 00:16:34.212 "name": "BaseBdev3", 00:16:34.212 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:34.212 "is_configured": true, 00:16:34.212 "data_offset": 2048, 00:16:34.212 "data_size": 63488 00:16:34.212 }, 00:16:34.212 { 00:16:34.212 "name": "BaseBdev4", 00:16:34.212 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:34.212 "is_configured": true, 00:16:34.212 "data_offset": 2048, 00:16:34.212 "data_size": 63488 00:16:34.212 } 00:16:34.212 ] 00:16:34.212 }' 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.212 23:50:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.780 "name": "raid_bdev1", 00:16:34.780 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:34.780 "strip_size_kb": 64, 00:16:34.780 "state": "online", 00:16:34.780 "raid_level": "raid5f", 00:16:34.780 "superblock": true, 00:16:34.780 "num_base_bdevs": 4, 00:16:34.780 "num_base_bdevs_discovered": 3, 00:16:34.780 "num_base_bdevs_operational": 3, 00:16:34.780 "base_bdevs_list": [ 00:16:34.780 { 00:16:34.780 "name": null, 00:16:34.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.780 "is_configured": false, 00:16:34.780 "data_offset": 0, 00:16:34.780 "data_size": 63488 00:16:34.780 }, 00:16:34.780 { 00:16:34.780 "name": "BaseBdev2", 00:16:34.780 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:34.780 "is_configured": true, 00:16:34.780 "data_offset": 2048, 00:16:34.780 "data_size": 63488 00:16:34.780 }, 00:16:34.780 { 00:16:34.780 "name": "BaseBdev3", 00:16:34.780 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:34.780 "is_configured": true, 00:16:34.780 "data_offset": 2048, 00:16:34.780 "data_size": 63488 00:16:34.780 }, 00:16:34.780 { 00:16:34.780 "name": "BaseBdev4", 00:16:34.780 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 
00:16:34.780 "is_configured": true, 00:16:34.780 "data_offset": 2048, 00:16:34.780 "data_size": 63488 00:16:34.780 } 00:16:34.780 ] 00:16:34.780 }' 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.780 [2024-12-06 23:50:46.184968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.780 [2024-12-06 23:50:46.199256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.780 23:50:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:34.780 [2024-12-06 23:50:46.208138] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.717 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.717 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.717 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.717 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:35.717 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.717 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.717 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.717 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.718 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.718 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.718 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.718 "name": "raid_bdev1", 00:16:35.718 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:35.718 "strip_size_kb": 64, 00:16:35.718 "state": "online", 00:16:35.718 "raid_level": "raid5f", 00:16:35.718 "superblock": true, 00:16:35.718 "num_base_bdevs": 4, 00:16:35.718 "num_base_bdevs_discovered": 4, 00:16:35.718 "num_base_bdevs_operational": 4, 00:16:35.718 "process": { 00:16:35.718 "type": "rebuild", 00:16:35.718 "target": "spare", 00:16:35.718 "progress": { 00:16:35.718 "blocks": 19200, 00:16:35.718 "percent": 10 00:16:35.718 } 00:16:35.718 }, 00:16:35.718 "base_bdevs_list": [ 00:16:35.718 { 00:16:35.718 "name": "spare", 00:16:35.718 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:35.718 "is_configured": true, 00:16:35.718 "data_offset": 2048, 00:16:35.718 "data_size": 63488 00:16:35.718 }, 00:16:35.718 { 00:16:35.718 "name": "BaseBdev2", 00:16:35.718 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:35.718 "is_configured": true, 00:16:35.718 "data_offset": 2048, 00:16:35.718 "data_size": 63488 00:16:35.718 }, 00:16:35.718 { 00:16:35.718 "name": "BaseBdev3", 00:16:35.718 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:35.718 "is_configured": true, 00:16:35.718 "data_offset": 2048, 
00:16:35.718 "data_size": 63488 00:16:35.718 }, 00:16:35.718 { 00:16:35.718 "name": "BaseBdev4", 00:16:35.718 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:35.718 "is_configured": true, 00:16:35.718 "data_offset": 2048, 00:16:35.718 "data_size": 63488 00:16:35.718 } 00:16:35.718 ] 00:16:35.718 }' 00:16:35.718 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:35.977 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=636 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.977 "name": "raid_bdev1", 00:16:35.977 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:35.977 "strip_size_kb": 64, 00:16:35.977 "state": "online", 00:16:35.977 "raid_level": "raid5f", 00:16:35.977 "superblock": true, 00:16:35.977 "num_base_bdevs": 4, 00:16:35.977 "num_base_bdevs_discovered": 4, 00:16:35.977 "num_base_bdevs_operational": 4, 00:16:35.977 "process": { 00:16:35.977 "type": "rebuild", 00:16:35.977 "target": "spare", 00:16:35.977 "progress": { 00:16:35.977 "blocks": 21120, 00:16:35.977 "percent": 11 00:16:35.977 } 00:16:35.977 }, 00:16:35.977 "base_bdevs_list": [ 00:16:35.977 { 00:16:35.977 "name": "spare", 00:16:35.977 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:35.977 "is_configured": true, 00:16:35.977 "data_offset": 2048, 00:16:35.977 "data_size": 63488 00:16:35.977 }, 00:16:35.977 { 00:16:35.977 "name": "BaseBdev2", 00:16:35.977 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:35.977 "is_configured": true, 00:16:35.977 "data_offset": 2048, 00:16:35.977 "data_size": 63488 00:16:35.977 }, 00:16:35.977 { 00:16:35.977 "name": "BaseBdev3", 00:16:35.977 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:35.977 "is_configured": true, 00:16:35.977 "data_offset": 2048, 
00:16:35.977 "data_size": 63488 00:16:35.977 }, 00:16:35.977 { 00:16:35.977 "name": "BaseBdev4", 00:16:35.977 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:35.977 "is_configured": true, 00:16:35.977 "data_offset": 2048, 00:16:35.977 "data_size": 63488 00:16:35.977 } 00:16:35.977 ] 00:16:35.977 }' 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.977 23:50:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.359 "name": "raid_bdev1", 00:16:37.359 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:37.359 "strip_size_kb": 64, 00:16:37.359 "state": "online", 00:16:37.359 "raid_level": "raid5f", 00:16:37.359 "superblock": true, 00:16:37.359 "num_base_bdevs": 4, 00:16:37.359 "num_base_bdevs_discovered": 4, 00:16:37.359 "num_base_bdevs_operational": 4, 00:16:37.359 "process": { 00:16:37.359 "type": "rebuild", 00:16:37.359 "target": "spare", 00:16:37.359 "progress": { 00:16:37.359 "blocks": 42240, 00:16:37.359 "percent": 22 00:16:37.359 } 00:16:37.359 }, 00:16:37.359 "base_bdevs_list": [ 00:16:37.359 { 00:16:37.359 "name": "spare", 00:16:37.359 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:37.359 "is_configured": true, 00:16:37.359 "data_offset": 2048, 00:16:37.359 "data_size": 63488 00:16:37.359 }, 00:16:37.359 { 00:16:37.359 "name": "BaseBdev2", 00:16:37.359 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:37.359 "is_configured": true, 00:16:37.359 "data_offset": 2048, 00:16:37.359 "data_size": 63488 00:16:37.359 }, 00:16:37.359 { 00:16:37.359 "name": "BaseBdev3", 00:16:37.359 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:37.359 "is_configured": true, 00:16:37.359 "data_offset": 2048, 00:16:37.359 "data_size": 63488 00:16:37.359 }, 00:16:37.359 { 00:16:37.359 "name": "BaseBdev4", 00:16:37.359 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:37.359 "is_configured": true, 00:16:37.359 "data_offset": 2048, 00:16:37.359 "data_size": 63488 00:16:37.359 } 00:16:37.359 ] 00:16:37.359 }' 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.359 23:50:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.359 23:50:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.301 "name": "raid_bdev1", 00:16:38.301 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:38.301 "strip_size_kb": 64, 00:16:38.301 "state": "online", 00:16:38.301 "raid_level": "raid5f", 00:16:38.301 "superblock": true, 00:16:38.301 "num_base_bdevs": 4, 00:16:38.301 "num_base_bdevs_discovered": 4, 00:16:38.301 "num_base_bdevs_operational": 
4, 00:16:38.301 "process": { 00:16:38.301 "type": "rebuild", 00:16:38.301 "target": "spare", 00:16:38.301 "progress": { 00:16:38.301 "blocks": 65280, 00:16:38.301 "percent": 34 00:16:38.301 } 00:16:38.301 }, 00:16:38.301 "base_bdevs_list": [ 00:16:38.301 { 00:16:38.301 "name": "spare", 00:16:38.301 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:38.301 "is_configured": true, 00:16:38.301 "data_offset": 2048, 00:16:38.301 "data_size": 63488 00:16:38.301 }, 00:16:38.301 { 00:16:38.301 "name": "BaseBdev2", 00:16:38.301 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:38.301 "is_configured": true, 00:16:38.301 "data_offset": 2048, 00:16:38.301 "data_size": 63488 00:16:38.301 }, 00:16:38.301 { 00:16:38.301 "name": "BaseBdev3", 00:16:38.301 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:38.301 "is_configured": true, 00:16:38.301 "data_offset": 2048, 00:16:38.301 "data_size": 63488 00:16:38.301 }, 00:16:38.301 { 00:16:38.301 "name": "BaseBdev4", 00:16:38.301 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:38.301 "is_configured": true, 00:16:38.301 "data_offset": 2048, 00:16:38.301 "data_size": 63488 00:16:38.301 } 00:16:38.301 ] 00:16:38.301 }' 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.301 23:50:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.243 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.243 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.243 
23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.243 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.243 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.243 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.504 "name": "raid_bdev1", 00:16:39.504 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:39.504 "strip_size_kb": 64, 00:16:39.504 "state": "online", 00:16:39.504 "raid_level": "raid5f", 00:16:39.504 "superblock": true, 00:16:39.504 "num_base_bdevs": 4, 00:16:39.504 "num_base_bdevs_discovered": 4, 00:16:39.504 "num_base_bdevs_operational": 4, 00:16:39.504 "process": { 00:16:39.504 "type": "rebuild", 00:16:39.504 "target": "spare", 00:16:39.504 "progress": { 00:16:39.504 "blocks": 86400, 00:16:39.504 "percent": 45 00:16:39.504 } 00:16:39.504 }, 00:16:39.504 "base_bdevs_list": [ 00:16:39.504 { 00:16:39.504 "name": "spare", 00:16:39.504 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:39.504 "is_configured": true, 00:16:39.504 "data_offset": 2048, 00:16:39.504 "data_size": 63488 00:16:39.504 }, 00:16:39.504 { 00:16:39.504 "name": "BaseBdev2", 00:16:39.504 "uuid": 
"b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:39.504 "is_configured": true, 00:16:39.504 "data_offset": 2048, 00:16:39.504 "data_size": 63488 00:16:39.504 }, 00:16:39.504 { 00:16:39.504 "name": "BaseBdev3", 00:16:39.504 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:39.504 "is_configured": true, 00:16:39.504 "data_offset": 2048, 00:16:39.504 "data_size": 63488 00:16:39.504 }, 00:16:39.504 { 00:16:39.504 "name": "BaseBdev4", 00:16:39.504 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:39.504 "is_configured": true, 00:16:39.504 "data_offset": 2048, 00:16:39.504 "data_size": 63488 00:16:39.504 } 00:16:39.504 ] 00:16:39.504 }' 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.504 23:50:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.445 "name": "raid_bdev1", 00:16:40.445 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:40.445 "strip_size_kb": 64, 00:16:40.445 "state": "online", 00:16:40.445 "raid_level": "raid5f", 00:16:40.445 "superblock": true, 00:16:40.445 "num_base_bdevs": 4, 00:16:40.445 "num_base_bdevs_discovered": 4, 00:16:40.445 "num_base_bdevs_operational": 4, 00:16:40.445 "process": { 00:16:40.445 "type": "rebuild", 00:16:40.445 "target": "spare", 00:16:40.445 "progress": { 00:16:40.445 "blocks": 109440, 00:16:40.445 "percent": 57 00:16:40.445 } 00:16:40.445 }, 00:16:40.445 "base_bdevs_list": [ 00:16:40.445 { 00:16:40.445 "name": "spare", 00:16:40.445 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:40.445 "is_configured": true, 00:16:40.445 "data_offset": 2048, 00:16:40.445 "data_size": 63488 00:16:40.445 }, 00:16:40.445 { 00:16:40.445 "name": "BaseBdev2", 00:16:40.445 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:40.445 "is_configured": true, 00:16:40.445 "data_offset": 2048, 00:16:40.445 "data_size": 63488 00:16:40.445 }, 00:16:40.445 { 00:16:40.445 "name": "BaseBdev3", 00:16:40.445 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:40.445 "is_configured": true, 00:16:40.445 "data_offset": 2048, 00:16:40.445 "data_size": 63488 00:16:40.445 }, 00:16:40.445 { 00:16:40.445 "name": "BaseBdev4", 00:16:40.445 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:40.445 "is_configured": true, 00:16:40.445 "data_offset": 
2048, 00:16:40.445 "data_size": 63488 00:16:40.445 } 00:16:40.445 ] 00:16:40.445 }' 00:16:40.445 23:50:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.705 23:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.705 23:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.705 23:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.705 23:50:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.646 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.646 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.646 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.646 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.646 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.646 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.646 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.647 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.647 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.647 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.647 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.647 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.647 
"name": "raid_bdev1", 00:16:41.647 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:41.647 "strip_size_kb": 64, 00:16:41.647 "state": "online", 00:16:41.647 "raid_level": "raid5f", 00:16:41.647 "superblock": true, 00:16:41.647 "num_base_bdevs": 4, 00:16:41.647 "num_base_bdevs_discovered": 4, 00:16:41.647 "num_base_bdevs_operational": 4, 00:16:41.647 "process": { 00:16:41.647 "type": "rebuild", 00:16:41.647 "target": "spare", 00:16:41.647 "progress": { 00:16:41.647 "blocks": 130560, 00:16:41.647 "percent": 68 00:16:41.647 } 00:16:41.647 }, 00:16:41.647 "base_bdevs_list": [ 00:16:41.647 { 00:16:41.647 "name": "spare", 00:16:41.647 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:41.647 "is_configured": true, 00:16:41.647 "data_offset": 2048, 00:16:41.647 "data_size": 63488 00:16:41.647 }, 00:16:41.647 { 00:16:41.647 "name": "BaseBdev2", 00:16:41.647 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:41.647 "is_configured": true, 00:16:41.647 "data_offset": 2048, 00:16:41.647 "data_size": 63488 00:16:41.647 }, 00:16:41.647 { 00:16:41.647 "name": "BaseBdev3", 00:16:41.647 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:41.647 "is_configured": true, 00:16:41.647 "data_offset": 2048, 00:16:41.647 "data_size": 63488 00:16:41.647 }, 00:16:41.647 { 00:16:41.647 "name": "BaseBdev4", 00:16:41.647 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:41.647 "is_configured": true, 00:16:41.647 "data_offset": 2048, 00:16:41.647 "data_size": 63488 00:16:41.647 } 00:16:41.647 ] 00:16:41.647 }' 00:16:41.647 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.647 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.647 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.907 23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.907 
23:50:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.849 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.849 "name": "raid_bdev1", 00:16:42.849 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:42.849 "strip_size_kb": 64, 00:16:42.849 "state": "online", 00:16:42.849 "raid_level": "raid5f", 00:16:42.849 "superblock": true, 00:16:42.849 "num_base_bdevs": 4, 00:16:42.849 "num_base_bdevs_discovered": 4, 00:16:42.849 "num_base_bdevs_operational": 4, 00:16:42.849 "process": { 00:16:42.849 "type": "rebuild", 00:16:42.849 "target": "spare", 00:16:42.849 "progress": { 00:16:42.849 "blocks": 153600, 00:16:42.849 "percent": 80 00:16:42.849 } 00:16:42.849 }, 
00:16:42.849 "base_bdevs_list": [ 00:16:42.849 { 00:16:42.849 "name": "spare", 00:16:42.849 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:42.850 "is_configured": true, 00:16:42.850 "data_offset": 2048, 00:16:42.850 "data_size": 63488 00:16:42.850 }, 00:16:42.850 { 00:16:42.850 "name": "BaseBdev2", 00:16:42.850 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:42.850 "is_configured": true, 00:16:42.850 "data_offset": 2048, 00:16:42.850 "data_size": 63488 00:16:42.850 }, 00:16:42.850 { 00:16:42.850 "name": "BaseBdev3", 00:16:42.850 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:42.850 "is_configured": true, 00:16:42.850 "data_offset": 2048, 00:16:42.850 "data_size": 63488 00:16:42.850 }, 00:16:42.850 { 00:16:42.850 "name": "BaseBdev4", 00:16:42.850 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:42.850 "is_configured": true, 00:16:42.850 "data_offset": 2048, 00:16:42.850 "data_size": 63488 00:16:42.850 } 00:16:42.850 ] 00:16:42.850 }' 00:16:42.850 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.850 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.850 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.850 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.850 23:50:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.230 "name": "raid_bdev1", 00:16:44.230 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:44.230 "strip_size_kb": 64, 00:16:44.230 "state": "online", 00:16:44.230 "raid_level": "raid5f", 00:16:44.230 "superblock": true, 00:16:44.230 "num_base_bdevs": 4, 00:16:44.230 "num_base_bdevs_discovered": 4, 00:16:44.230 "num_base_bdevs_operational": 4, 00:16:44.230 "process": { 00:16:44.230 "type": "rebuild", 00:16:44.230 "target": "spare", 00:16:44.230 "progress": { 00:16:44.230 "blocks": 174720, 00:16:44.230 "percent": 91 00:16:44.230 } 00:16:44.230 }, 00:16:44.230 "base_bdevs_list": [ 00:16:44.230 { 00:16:44.230 "name": "spare", 00:16:44.230 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:44.230 "is_configured": true, 00:16:44.230 "data_offset": 2048, 00:16:44.230 "data_size": 63488 00:16:44.230 }, 00:16:44.230 { 00:16:44.230 "name": "BaseBdev2", 00:16:44.230 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:44.230 "is_configured": true, 00:16:44.230 "data_offset": 2048, 00:16:44.230 "data_size": 63488 00:16:44.230 }, 00:16:44.230 { 00:16:44.230 "name": "BaseBdev3", 
00:16:44.230 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:44.230 "is_configured": true, 00:16:44.230 "data_offset": 2048, 00:16:44.230 "data_size": 63488 00:16:44.230 }, 00:16:44.230 { 00:16:44.230 "name": "BaseBdev4", 00:16:44.230 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:44.230 "is_configured": true, 00:16:44.230 "data_offset": 2048, 00:16:44.230 "data_size": 63488 00:16:44.230 } 00:16:44.230 ] 00:16:44.230 }' 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.230 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.231 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.231 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.231 23:50:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.800 [2024-12-06 23:50:56.250989] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:44.800 [2024-12-06 23:50:56.251055] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:44.800 [2024-12-06 23:50:56.251171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.060 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.060 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.061 23:50:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.061 "name": "raid_bdev1", 00:16:45.061 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:45.061 "strip_size_kb": 64, 00:16:45.061 "state": "online", 00:16:45.061 "raid_level": "raid5f", 00:16:45.061 "superblock": true, 00:16:45.061 "num_base_bdevs": 4, 00:16:45.061 "num_base_bdevs_discovered": 4, 00:16:45.061 "num_base_bdevs_operational": 4, 00:16:45.061 "base_bdevs_list": [ 00:16:45.061 { 00:16:45.061 "name": "spare", 00:16:45.061 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:45.061 "is_configured": true, 00:16:45.061 "data_offset": 2048, 00:16:45.061 "data_size": 63488 00:16:45.061 }, 00:16:45.061 { 00:16:45.061 "name": "BaseBdev2", 00:16:45.061 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:45.061 "is_configured": true, 00:16:45.061 "data_offset": 2048, 00:16:45.061 "data_size": 63488 00:16:45.061 }, 00:16:45.061 { 00:16:45.061 "name": "BaseBdev3", 00:16:45.061 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:45.061 "is_configured": true, 00:16:45.061 "data_offset": 2048, 00:16:45.061 "data_size": 63488 00:16:45.061 }, 00:16:45.061 { 00:16:45.061 "name": "BaseBdev4", 00:16:45.061 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:45.061 "is_configured": true, 00:16:45.061 "data_offset": 2048, 
00:16:45.061 "data_size": 63488 00:16:45.061 } 00:16:45.061 ] 00:16:45.061 }' 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:45.061 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.321 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.321 "name": "raid_bdev1", 00:16:45.321 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:45.321 "strip_size_kb": 64, 00:16:45.321 
"state": "online", 00:16:45.321 "raid_level": "raid5f", 00:16:45.321 "superblock": true, 00:16:45.321 "num_base_bdevs": 4, 00:16:45.321 "num_base_bdevs_discovered": 4, 00:16:45.321 "num_base_bdevs_operational": 4, 00:16:45.321 "base_bdevs_list": [ 00:16:45.321 { 00:16:45.321 "name": "spare", 00:16:45.321 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:45.321 "is_configured": true, 00:16:45.321 "data_offset": 2048, 00:16:45.322 "data_size": 63488 00:16:45.322 }, 00:16:45.322 { 00:16:45.322 "name": "BaseBdev2", 00:16:45.322 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:45.322 "is_configured": true, 00:16:45.322 "data_offset": 2048, 00:16:45.322 "data_size": 63488 00:16:45.322 }, 00:16:45.322 { 00:16:45.322 "name": "BaseBdev3", 00:16:45.322 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:45.322 "is_configured": true, 00:16:45.322 "data_offset": 2048, 00:16:45.322 "data_size": 63488 00:16:45.322 }, 00:16:45.322 { 00:16:45.322 "name": "BaseBdev4", 00:16:45.322 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:45.322 "is_configured": true, 00:16:45.322 "data_offset": 2048, 00:16:45.322 "data_size": 63488 00:16:45.322 } 00:16:45.322 ] 00:16:45.322 }' 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.322 "name": "raid_bdev1", 00:16:45.322 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:45.322 "strip_size_kb": 64, 00:16:45.322 "state": "online", 00:16:45.322 "raid_level": "raid5f", 00:16:45.322 "superblock": true, 00:16:45.322 "num_base_bdevs": 4, 00:16:45.322 "num_base_bdevs_discovered": 4, 00:16:45.322 "num_base_bdevs_operational": 4, 00:16:45.322 "base_bdevs_list": [ 00:16:45.322 { 00:16:45.322 "name": "spare", 00:16:45.322 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:45.322 "is_configured": true, 00:16:45.322 
"data_offset": 2048, 00:16:45.322 "data_size": 63488 00:16:45.322 }, 00:16:45.322 { 00:16:45.322 "name": "BaseBdev2", 00:16:45.322 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:45.322 "is_configured": true, 00:16:45.322 "data_offset": 2048, 00:16:45.322 "data_size": 63488 00:16:45.322 }, 00:16:45.322 { 00:16:45.322 "name": "BaseBdev3", 00:16:45.322 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:45.322 "is_configured": true, 00:16:45.322 "data_offset": 2048, 00:16:45.322 "data_size": 63488 00:16:45.322 }, 00:16:45.322 { 00:16:45.322 "name": "BaseBdev4", 00:16:45.322 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:45.322 "is_configured": true, 00:16:45.322 "data_offset": 2048, 00:16:45.322 "data_size": 63488 00:16:45.322 } 00:16:45.322 ] 00:16:45.322 }' 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.322 23:50:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.893 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.893 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.893 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.893 [2024-12-06 23:50:57.246512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.893 [2024-12-06 23:50:57.246544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.893 [2024-12-06 23:50:57.246615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.893 [2024-12-06 23:50:57.246716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.894 [2024-12-06 23:50:57.246738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:45.894 
23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:45.894 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:46.154 /dev/nbd0 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.154 1+0 records in 00:16:46.154 1+0 records out 00:16:46.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612494 s, 6.7 MB/s 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.154 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:46.415 /dev/nbd1 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.415 1+0 records in 00:16:46.415 1+0 records out 00:16:46.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460488 s, 8.9 MB/s 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.415 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:46.676 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.676 23:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.676 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.937 
23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.937 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.937 [2024-12-06 23:50:58.391213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:46.937 [2024-12-06 23:50:58.391313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.938 [2024-12-06 23:50:58.391352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:46.938 [2024-12-06 23:50:58.391380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.938 [2024-12-06 23:50:58.393656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.938 [2024-12-06 23:50:58.393743] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:46.938 [2024-12-06 23:50:58.393879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:46.938 [2024-12-06 23:50:58.393965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.938 [2024-12-06 23:50:58.394148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.938 [2024-12-06 23:50:58.394279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.938 [2024-12-06 23:50:58.394404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.938 spare 00:16:46.938 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:46.938 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:46.938 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.938 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.938 [2024-12-06 23:50:58.494340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:46.938 [2024-12-06 23:50:58.494410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:46.938 [2024-12-06 23:50:58.494704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:47.198 [2024-12-06 23:50:58.501527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:47.198 [2024-12-06 23:50:58.501584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:47.198 [2024-12-06 23:50:58.501818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.198 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.199 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.199 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.199 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.199 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.199 "name": "raid_bdev1", 00:16:47.199 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:47.199 "strip_size_kb": 64, 00:16:47.199 "state": "online", 00:16:47.199 "raid_level": "raid5f", 00:16:47.199 "superblock": true, 00:16:47.199 "num_base_bdevs": 4, 00:16:47.199 "num_base_bdevs_discovered": 4, 00:16:47.199 "num_base_bdevs_operational": 4, 00:16:47.199 "base_bdevs_list": [ 00:16:47.199 { 00:16:47.199 "name": "spare", 00:16:47.199 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:47.199 "is_configured": true, 00:16:47.199 "data_offset": 2048, 00:16:47.199 "data_size": 63488 00:16:47.199 }, 00:16:47.199 { 00:16:47.199 "name": "BaseBdev2", 00:16:47.199 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:47.199 "is_configured": true, 00:16:47.199 "data_offset": 2048, 00:16:47.199 "data_size": 63488 00:16:47.199 }, 00:16:47.199 { 00:16:47.199 "name": "BaseBdev3", 00:16:47.199 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:47.199 
"is_configured": true, 00:16:47.199 "data_offset": 2048, 00:16:47.199 "data_size": 63488 00:16:47.199 }, 00:16:47.199 { 00:16:47.199 "name": "BaseBdev4", 00:16:47.199 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:47.199 "is_configured": true, 00:16:47.199 "data_offset": 2048, 00:16:47.199 "data_size": 63488 00:16:47.199 } 00:16:47.199 ] 00:16:47.199 }' 00:16:47.199 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.199 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.459 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.459 "name": "raid_bdev1", 00:16:47.459 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:47.459 "strip_size_kb": 64, 00:16:47.459 "state": "online", 00:16:47.459 "raid_level": "raid5f", 
00:16:47.459 "superblock": true, 00:16:47.459 "num_base_bdevs": 4, 00:16:47.459 "num_base_bdevs_discovered": 4, 00:16:47.459 "num_base_bdevs_operational": 4, 00:16:47.459 "base_bdevs_list": [ 00:16:47.459 { 00:16:47.459 "name": "spare", 00:16:47.459 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:47.459 "is_configured": true, 00:16:47.459 "data_offset": 2048, 00:16:47.459 "data_size": 63488 00:16:47.459 }, 00:16:47.459 { 00:16:47.459 "name": "BaseBdev2", 00:16:47.459 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:47.459 "is_configured": true, 00:16:47.459 "data_offset": 2048, 00:16:47.459 "data_size": 63488 00:16:47.460 }, 00:16:47.460 { 00:16:47.460 "name": "BaseBdev3", 00:16:47.460 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:47.460 "is_configured": true, 00:16:47.460 "data_offset": 2048, 00:16:47.460 "data_size": 63488 00:16:47.460 }, 00:16:47.460 { 00:16:47.460 "name": "BaseBdev4", 00:16:47.460 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:47.460 "is_configured": true, 00:16:47.460 "data_offset": 2048, 00:16:47.460 "data_size": 63488 00:16:47.460 } 00:16:47.460 ] 00:16:47.460 }' 00:16:47.460 23:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 [2024-12-06 23:50:59.133423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.720 "name": "raid_bdev1", 00:16:47.720 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:47.720 "strip_size_kb": 64, 00:16:47.720 "state": "online", 00:16:47.720 "raid_level": "raid5f", 00:16:47.720 "superblock": true, 00:16:47.720 "num_base_bdevs": 4, 00:16:47.720 "num_base_bdevs_discovered": 3, 00:16:47.720 "num_base_bdevs_operational": 3, 00:16:47.720 "base_bdevs_list": [ 00:16:47.720 { 00:16:47.720 "name": null, 00:16:47.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.720 "is_configured": false, 00:16:47.720 "data_offset": 0, 00:16:47.720 "data_size": 63488 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "name": "BaseBdev2", 00:16:47.720 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:47.720 "is_configured": true, 00:16:47.720 "data_offset": 2048, 00:16:47.720 "data_size": 63488 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "name": "BaseBdev3", 00:16:47.720 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:47.720 "is_configured": true, 00:16:47.720 "data_offset": 2048, 00:16:47.720 "data_size": 63488 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "name": "BaseBdev4", 00:16:47.720 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:47.720 "is_configured": true, 00:16:47.720 "data_offset": 2048, 00:16:47.720 "data_size": 63488 00:16:47.720 } 00:16:47.720 ] 00:16:47.720 }' 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:47.720 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.291 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.291 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.291 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.291 [2024-12-06 23:50:59.588695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.291 [2024-12-06 23:50:59.588911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.291 [2024-12-06 23:50:59.588978] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:48.291 [2024-12-06 23:50:59.589035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.291 [2024-12-06 23:50:59.603067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:48.291 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.291 23:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:48.291 [2024-12-06 23:50:59.611698] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.233 23:51:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.233 "name": "raid_bdev1", 00:16:49.233 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:49.233 "strip_size_kb": 64, 00:16:49.233 "state": "online", 00:16:49.233 "raid_level": "raid5f", 00:16:49.233 "superblock": true, 00:16:49.233 "num_base_bdevs": 4, 00:16:49.233 "num_base_bdevs_discovered": 4, 00:16:49.233 "num_base_bdevs_operational": 4, 00:16:49.233 "process": { 00:16:49.233 "type": "rebuild", 00:16:49.233 "target": "spare", 00:16:49.233 "progress": { 00:16:49.233 "blocks": 19200, 00:16:49.233 "percent": 10 00:16:49.233 } 00:16:49.233 }, 00:16:49.233 "base_bdevs_list": [ 00:16:49.233 { 00:16:49.233 "name": "spare", 00:16:49.233 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:49.233 "is_configured": true, 00:16:49.233 "data_offset": 2048, 00:16:49.233 "data_size": 63488 00:16:49.233 }, 00:16:49.233 { 00:16:49.233 "name": "BaseBdev2", 00:16:49.233 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:49.233 "is_configured": true, 00:16:49.233 "data_offset": 2048, 00:16:49.233 "data_size": 63488 00:16:49.233 }, 00:16:49.233 { 00:16:49.233 "name": "BaseBdev3", 00:16:49.233 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:49.233 "is_configured": true, 00:16:49.233 "data_offset": 2048, 00:16:49.233 "data_size": 
63488 00:16:49.233 }, 00:16:49.233 { 00:16:49.233 "name": "BaseBdev4", 00:16:49.233 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:49.233 "is_configured": true, 00:16:49.233 "data_offset": 2048, 00:16:49.233 "data_size": 63488 00:16:49.233 } 00:16:49.233 ] 00:16:49.233 }' 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.233 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.234 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.234 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.234 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:49.234 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.234 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.234 [2024-12-06 23:51:00.746511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.495 [2024-12-06 23:51:00.817464] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:49.495 [2024-12-06 23:51:00.817530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.495 [2024-12-06 23:51:00.817546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.495 [2024-12-06 23:51:00.817555] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.495 "name": "raid_bdev1", 00:16:49.495 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:49.495 "strip_size_kb": 64, 00:16:49.495 "state": "online", 00:16:49.495 "raid_level": "raid5f", 00:16:49.495 "superblock": true, 00:16:49.495 "num_base_bdevs": 4, 00:16:49.495 "num_base_bdevs_discovered": 3, 00:16:49.495 "num_base_bdevs_operational": 3, 00:16:49.495 "base_bdevs_list": [ 00:16:49.495 
{ 00:16:49.495 "name": null, 00:16:49.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.495 "is_configured": false, 00:16:49.495 "data_offset": 0, 00:16:49.495 "data_size": 63488 00:16:49.495 }, 00:16:49.495 { 00:16:49.495 "name": "BaseBdev2", 00:16:49.495 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:49.495 "is_configured": true, 00:16:49.495 "data_offset": 2048, 00:16:49.495 "data_size": 63488 00:16:49.495 }, 00:16:49.495 { 00:16:49.495 "name": "BaseBdev3", 00:16:49.495 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:49.495 "is_configured": true, 00:16:49.495 "data_offset": 2048, 00:16:49.495 "data_size": 63488 00:16:49.495 }, 00:16:49.495 { 00:16:49.495 "name": "BaseBdev4", 00:16:49.495 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:49.495 "is_configured": true, 00:16:49.495 "data_offset": 2048, 00:16:49.495 "data_size": 63488 00:16:49.495 } 00:16:49.495 ] 00:16:49.495 }' 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.495 23:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.755 23:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:49.755 23:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.755 23:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.755 [2024-12-06 23:51:01.277672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:49.755 [2024-12-06 23:51:01.277784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.755 [2024-12-06 23:51:01.277827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:49.755 [2024-12-06 23:51:01.277858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.755 [2024-12-06 23:51:01.278357] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.755 [2024-12-06 23:51:01.278419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:49.755 [2024-12-06 23:51:01.278536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:49.755 [2024-12-06 23:51:01.278580] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:49.755 [2024-12-06 23:51:01.278620] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:49.755 [2024-12-06 23:51:01.278705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.755 [2024-12-06 23:51:01.293269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:49.755 spare 00:16:49.755 23:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.755 23:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:49.755 [2024-12-06 23:51:01.302105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.136 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.136 "name": "raid_bdev1", 00:16:51.136 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:51.136 "strip_size_kb": 64, 00:16:51.136 "state": "online", 00:16:51.136 "raid_level": "raid5f", 00:16:51.136 "superblock": true, 00:16:51.136 "num_base_bdevs": 4, 00:16:51.136 "num_base_bdevs_discovered": 4, 00:16:51.136 "num_base_bdevs_operational": 4, 00:16:51.136 "process": { 00:16:51.136 "type": "rebuild", 00:16:51.136 "target": "spare", 00:16:51.136 "progress": { 00:16:51.136 "blocks": 19200, 00:16:51.136 "percent": 10 00:16:51.137 } 00:16:51.137 }, 00:16:51.137 "base_bdevs_list": [ 00:16:51.137 { 00:16:51.137 "name": "spare", 00:16:51.137 "uuid": "d5df9f6f-4b8e-5829-b9c6-84d52ad40e5d", 00:16:51.137 "is_configured": true, 00:16:51.137 "data_offset": 2048, 00:16:51.137 "data_size": 63488 00:16:51.137 }, 00:16:51.137 { 00:16:51.137 "name": "BaseBdev2", 00:16:51.137 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:51.137 "is_configured": true, 00:16:51.137 "data_offset": 2048, 00:16:51.137 "data_size": 63488 00:16:51.137 }, 00:16:51.137 { 00:16:51.137 "name": "BaseBdev3", 00:16:51.137 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:51.137 "is_configured": true, 00:16:51.137 "data_offset": 2048, 00:16:51.137 "data_size": 63488 00:16:51.137 }, 00:16:51.137 { 00:16:51.137 "name": "BaseBdev4", 00:16:51.137 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:51.137 "is_configured": true, 00:16:51.137 "data_offset": 2048, 00:16:51.137 "data_size": 63488 00:16:51.137 } 
00:16:51.137 ] 00:16:51.137 }' 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.137 [2024-12-06 23:51:02.453053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.137 [2024-12-06 23:51:02.508053] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:51.137 [2024-12-06 23:51:02.508122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.137 [2024-12-06 23:51:02.508159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.137 [2024-12-06 23:51:02.508167] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.137 "name": "raid_bdev1", 00:16:51.137 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:51.137 "strip_size_kb": 64, 00:16:51.137 "state": "online", 00:16:51.137 "raid_level": "raid5f", 00:16:51.137 "superblock": true, 00:16:51.137 "num_base_bdevs": 4, 00:16:51.137 "num_base_bdevs_discovered": 3, 00:16:51.137 "num_base_bdevs_operational": 3, 00:16:51.137 "base_bdevs_list": [ 00:16:51.137 { 00:16:51.137 "name": null, 00:16:51.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.137 "is_configured": false, 00:16:51.137 "data_offset": 0, 00:16:51.137 "data_size": 63488 00:16:51.137 }, 00:16:51.137 { 00:16:51.137 
"name": "BaseBdev2", 00:16:51.137 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:51.137 "is_configured": true, 00:16:51.137 "data_offset": 2048, 00:16:51.137 "data_size": 63488 00:16:51.137 }, 00:16:51.137 { 00:16:51.137 "name": "BaseBdev3", 00:16:51.137 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:51.137 "is_configured": true, 00:16:51.137 "data_offset": 2048, 00:16:51.137 "data_size": 63488 00:16:51.137 }, 00:16:51.137 { 00:16:51.137 "name": "BaseBdev4", 00:16:51.137 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:51.137 "is_configured": true, 00:16:51.137 "data_offset": 2048, 00:16:51.137 "data_size": 63488 00:16:51.137 } 00:16:51.137 ] 00:16:51.137 }' 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.137 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.707 23:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:51.707 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.707 "name": "raid_bdev1", 00:16:51.707 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:51.707 "strip_size_kb": 64, 00:16:51.707 "state": "online", 00:16:51.707 "raid_level": "raid5f", 00:16:51.707 "superblock": true, 00:16:51.707 "num_base_bdevs": 4, 00:16:51.707 "num_base_bdevs_discovered": 3, 00:16:51.707 "num_base_bdevs_operational": 3, 00:16:51.707 "base_bdevs_list": [ 00:16:51.707 { 00:16:51.707 "name": null, 00:16:51.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.707 "is_configured": false, 00:16:51.707 "data_offset": 0, 00:16:51.707 "data_size": 63488 00:16:51.707 }, 00:16:51.708 { 00:16:51.708 "name": "BaseBdev2", 00:16:51.708 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:51.708 "is_configured": true, 00:16:51.708 "data_offset": 2048, 00:16:51.708 "data_size": 63488 00:16:51.708 }, 00:16:51.708 { 00:16:51.708 "name": "BaseBdev3", 00:16:51.708 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:51.708 "is_configured": true, 00:16:51.708 "data_offset": 2048, 00:16:51.708 "data_size": 63488 00:16:51.708 }, 00:16:51.708 { 00:16:51.708 "name": "BaseBdev4", 00:16:51.708 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:51.708 "is_configured": true, 00:16:51.708 "data_offset": 2048, 00:16:51.708 "data_size": 63488 00:16:51.708 } 00:16:51.708 ] 00:16:51.708 }' 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.708 [2024-12-06 23:51:03.124713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:51.708 [2024-12-06 23:51:03.124808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.708 [2024-12-06 23:51:03.124850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:51.708 [2024-12-06 23:51:03.124859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.708 [2024-12-06 23:51:03.125315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.708 [2024-12-06 23:51:03.125343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:51.708 [2024-12-06 23:51:03.125420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:51.708 [2024-12-06 23:51:03.125434] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:51.708 [2024-12-06 23:51:03.125446] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:51.708 [2024-12-06 23:51:03.125456] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:16:51.708 BaseBdev1 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.708 23:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.650 23:51:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.650 "name": "raid_bdev1", 00:16:52.650 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:52.650 "strip_size_kb": 64, 00:16:52.650 "state": "online", 00:16:52.650 "raid_level": "raid5f", 00:16:52.650 "superblock": true, 00:16:52.650 "num_base_bdevs": 4, 00:16:52.650 "num_base_bdevs_discovered": 3, 00:16:52.650 "num_base_bdevs_operational": 3, 00:16:52.650 "base_bdevs_list": [ 00:16:52.650 { 00:16:52.650 "name": null, 00:16:52.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.650 "is_configured": false, 00:16:52.650 "data_offset": 0, 00:16:52.650 "data_size": 63488 00:16:52.650 }, 00:16:52.650 { 00:16:52.650 "name": "BaseBdev2", 00:16:52.650 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:52.650 "is_configured": true, 00:16:52.650 "data_offset": 2048, 00:16:52.650 "data_size": 63488 00:16:52.650 }, 00:16:52.650 { 00:16:52.650 "name": "BaseBdev3", 00:16:52.650 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:52.650 "is_configured": true, 00:16:52.650 "data_offset": 2048, 00:16:52.650 "data_size": 63488 00:16:52.650 }, 00:16:52.650 { 00:16:52.650 "name": "BaseBdev4", 00:16:52.650 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:52.650 "is_configured": true, 00:16:52.650 "data_offset": 2048, 00:16:52.650 "data_size": 63488 00:16:52.650 } 00:16:52.650 ] 00:16:52.650 }' 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.650 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.261 23:51:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.261 "name": "raid_bdev1", 00:16:53.261 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:53.261 "strip_size_kb": 64, 00:16:53.261 "state": "online", 00:16:53.261 "raid_level": "raid5f", 00:16:53.261 "superblock": true, 00:16:53.261 "num_base_bdevs": 4, 00:16:53.261 "num_base_bdevs_discovered": 3, 00:16:53.261 "num_base_bdevs_operational": 3, 00:16:53.261 "base_bdevs_list": [ 00:16:53.261 { 00:16:53.261 "name": null, 00:16:53.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.261 "is_configured": false, 00:16:53.261 "data_offset": 0, 00:16:53.261 "data_size": 63488 00:16:53.261 }, 00:16:53.261 { 00:16:53.261 "name": "BaseBdev2", 00:16:53.261 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:53.261 "is_configured": true, 00:16:53.261 "data_offset": 2048, 00:16:53.261 "data_size": 63488 00:16:53.261 }, 00:16:53.261 { 00:16:53.261 "name": "BaseBdev3", 00:16:53.261 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:53.261 "is_configured": true, 00:16:53.261 "data_offset": 2048, 00:16:53.261 "data_size": 63488 00:16:53.261 }, 00:16:53.261 { 00:16:53.261 "name": "BaseBdev4", 00:16:53.261 "uuid": 
"2c1a6032-9487-5575-9504-df0933b889ef", 00:16:53.261 "is_configured": true, 00:16:53.261 "data_offset": 2048, 00:16:53.261 "data_size": 63488 00:16:53.261 } 00:16:53.261 ] 00:16:53.261 }' 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.261 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.261 [2024-12-06 23:51:04.746031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.261 
[2024-12-06 23:51:04.746199] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:53.261 [2024-12-06 23:51:04.746214] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:53.262 request: 00:16:53.262 { 00:16:53.262 "base_bdev": "BaseBdev1", 00:16:53.262 "raid_bdev": "raid_bdev1", 00:16:53.262 "method": "bdev_raid_add_base_bdev", 00:16:53.262 "req_id": 1 00:16:53.262 } 00:16:53.262 Got JSON-RPC error response 00:16:53.262 response: 00:16:53.262 { 00:16:53.262 "code": -22, 00:16:53.262 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:53.262 } 00:16:53.262 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:53.262 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:53.262 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.262 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.262 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.262 23:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.213 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.472 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.472 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.472 "name": "raid_bdev1", 00:16:54.472 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:54.472 "strip_size_kb": 64, 00:16:54.472 "state": "online", 00:16:54.472 "raid_level": "raid5f", 00:16:54.472 "superblock": true, 00:16:54.472 "num_base_bdevs": 4, 00:16:54.472 "num_base_bdevs_discovered": 3, 00:16:54.472 "num_base_bdevs_operational": 3, 00:16:54.472 "base_bdevs_list": [ 00:16:54.472 { 00:16:54.472 "name": null, 00:16:54.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.472 "is_configured": false, 00:16:54.472 "data_offset": 0, 00:16:54.472 "data_size": 63488 00:16:54.472 }, 00:16:54.472 { 00:16:54.472 "name": "BaseBdev2", 00:16:54.472 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:54.472 "is_configured": true, 00:16:54.472 "data_offset": 2048, 00:16:54.472 "data_size": 63488 00:16:54.472 }, 00:16:54.472 { 00:16:54.472 "name": 
"BaseBdev3", 00:16:54.472 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:54.472 "is_configured": true, 00:16:54.472 "data_offset": 2048, 00:16:54.472 "data_size": 63488 00:16:54.472 }, 00:16:54.472 { 00:16:54.472 "name": "BaseBdev4", 00:16:54.472 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:54.472 "is_configured": true, 00:16:54.472 "data_offset": 2048, 00:16:54.472 "data_size": 63488 00:16:54.472 } 00:16:54.472 ] 00:16:54.472 }' 00:16:54.472 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.472 23:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.731 "name": "raid_bdev1", 00:16:54.731 "uuid": "e2acbede-ffb0-4ee4-9ce5-e714933a3fd0", 00:16:54.731 
"strip_size_kb": 64, 00:16:54.731 "state": "online", 00:16:54.731 "raid_level": "raid5f", 00:16:54.731 "superblock": true, 00:16:54.731 "num_base_bdevs": 4, 00:16:54.731 "num_base_bdevs_discovered": 3, 00:16:54.731 "num_base_bdevs_operational": 3, 00:16:54.731 "base_bdevs_list": [ 00:16:54.731 { 00:16:54.731 "name": null, 00:16:54.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.731 "is_configured": false, 00:16:54.731 "data_offset": 0, 00:16:54.731 "data_size": 63488 00:16:54.731 }, 00:16:54.731 { 00:16:54.731 "name": "BaseBdev2", 00:16:54.731 "uuid": "b4d6ba97-cc59-5635-b31f-a2232669ecb1", 00:16:54.731 "is_configured": true, 00:16:54.731 "data_offset": 2048, 00:16:54.731 "data_size": 63488 00:16:54.731 }, 00:16:54.731 { 00:16:54.731 "name": "BaseBdev3", 00:16:54.731 "uuid": "1803971c-3885-5268-890f-493ab03a3283", 00:16:54.731 "is_configured": true, 00:16:54.731 "data_offset": 2048, 00:16:54.731 "data_size": 63488 00:16:54.731 }, 00:16:54.731 { 00:16:54.731 "name": "BaseBdev4", 00:16:54.731 "uuid": "2c1a6032-9487-5575-9504-df0933b889ef", 00:16:54.731 "is_configured": true, 00:16:54.731 "data_offset": 2048, 00:16:54.731 "data_size": 63488 00:16:54.731 } 00:16:54.731 ] 00:16:54.731 }' 00:16:54.731 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85029 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85029 ']' 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85029 00:16:54.990 
23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85029 00:16:54.990 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.990 killing process with pid 85029 00:16:54.990 Received shutdown signal, test time was about 60.000000 seconds 00:16:54.990 00:16:54.990 Latency(us) 00:16:54.990 [2024-12-06T23:51:06.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.990 [2024-12-06T23:51:06.553Z] =================================================================================================================== 00:16:54.990 [2024-12-06T23:51:06.554Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:54.991 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.991 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85029' 00:16:54.991 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85029 00:16:54.991 [2024-12-06 23:51:06.386580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.991 [2024-12-06 23:51:06.386704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.991 [2024-12-06 23:51:06.386778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.991 [2024-12-06 23:51:06.386791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:54.991 23:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85029 00:16:55.559 [2024-12-06 23:51:06.840208] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.497 23:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:56.497 00:16:56.497 real 0m26.630s 00:16:56.497 user 0m33.327s 00:16:56.497 sys 0m2.982s 00:16:56.497 23:51:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.497 ************************************ 00:16:56.497 END TEST raid5f_rebuild_test_sb 00:16:56.497 ************************************ 00:16:56.497 23:51:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.497 23:51:07 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:56.497 23:51:07 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:56.497 23:51:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:56.497 23:51:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.497 23:51:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.497 ************************************ 00:16:56.498 START TEST raid_state_function_test_sb_4k 00:16:56.498 ************************************ 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85834 
00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85834' 00:16:56.498 Process raid pid: 85834 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85834 00:16:56.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85834 ']' 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.498 23:51:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.757 [2024-12-06 23:51:08.071172] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:16:56.757 [2024-12-06 23:51:08.071285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.757 [2024-12-06 23:51:08.243829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.017 [2024-12-06 23:51:08.347416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.017 [2024-12-06 23:51:08.548792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.017 [2024-12-06 23:51:08.548906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.586 [2024-12-06 23:51:08.902372] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.586 [2024-12-06 23:51:08.902429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.586 [2024-12-06 23:51:08.902438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.586 [2024-12-06 23:51:08.902447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.586 "name": "Existed_Raid", 00:16:57.586 "uuid": 
"3509541a-a990-4625-afa5-ba2f999886fd", 00:16:57.586 "strip_size_kb": 0, 00:16:57.586 "state": "configuring", 00:16:57.586 "raid_level": "raid1", 00:16:57.586 "superblock": true, 00:16:57.586 "num_base_bdevs": 2, 00:16:57.586 "num_base_bdevs_discovered": 0, 00:16:57.586 "num_base_bdevs_operational": 2, 00:16:57.586 "base_bdevs_list": [ 00:16:57.586 { 00:16:57.586 "name": "BaseBdev1", 00:16:57.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.586 "is_configured": false, 00:16:57.586 "data_offset": 0, 00:16:57.586 "data_size": 0 00:16:57.586 }, 00:16:57.586 { 00:16:57.586 "name": "BaseBdev2", 00:16:57.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.586 "is_configured": false, 00:16:57.586 "data_offset": 0, 00:16:57.586 "data_size": 0 00:16:57.586 } 00:16:57.586 ] 00:16:57.586 }' 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.586 23:51:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 [2024-12-06 23:51:09.325577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:57.846 [2024-12-06 23:51:09.325654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:57.846 23:51:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 [2024-12-06 23:51:09.337566] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.846 [2024-12-06 23:51:09.337672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.846 [2024-12-06 23:51:09.337703] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.846 [2024-12-06 23:51:09.337728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 [2024-12-06 23:51:09.385737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.846 BaseBdev1 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.846 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.106 [ 00:16:58.106 { 00:16:58.106 "name": "BaseBdev1", 00:16:58.106 "aliases": [ 00:16:58.106 "54de2f1e-2faf-462a-984d-666030f3d11f" 00:16:58.106 ], 00:16:58.106 "product_name": "Malloc disk", 00:16:58.106 "block_size": 4096, 00:16:58.106 "num_blocks": 8192, 00:16:58.106 "uuid": "54de2f1e-2faf-462a-984d-666030f3d11f", 00:16:58.106 "assigned_rate_limits": { 00:16:58.106 "rw_ios_per_sec": 0, 00:16:58.106 "rw_mbytes_per_sec": 0, 00:16:58.106 "r_mbytes_per_sec": 0, 00:16:58.106 "w_mbytes_per_sec": 0 00:16:58.106 }, 00:16:58.106 "claimed": true, 00:16:58.106 "claim_type": "exclusive_write", 00:16:58.106 "zoned": false, 00:16:58.106 "supported_io_types": { 00:16:58.106 "read": true, 00:16:58.106 "write": true, 00:16:58.106 "unmap": true, 00:16:58.106 "flush": true, 00:16:58.106 "reset": true, 00:16:58.106 "nvme_admin": false, 00:16:58.106 "nvme_io": false, 00:16:58.106 "nvme_io_md": false, 00:16:58.106 "write_zeroes": true, 00:16:58.106 "zcopy": true, 00:16:58.106 
"get_zone_info": false, 00:16:58.106 "zone_management": false, 00:16:58.106 "zone_append": false, 00:16:58.106 "compare": false, 00:16:58.106 "compare_and_write": false, 00:16:58.106 "abort": true, 00:16:58.106 "seek_hole": false, 00:16:58.106 "seek_data": false, 00:16:58.106 "copy": true, 00:16:58.106 "nvme_iov_md": false 00:16:58.106 }, 00:16:58.106 "memory_domains": [ 00:16:58.106 { 00:16:58.106 "dma_device_id": "system", 00:16:58.106 "dma_device_type": 1 00:16:58.106 }, 00:16:58.106 { 00:16:58.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.106 "dma_device_type": 2 00:16:58.106 } 00:16:58.106 ], 00:16:58.106 "driver_specific": {} 00:16:58.106 } 00:16:58.106 ] 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.106 "name": "Existed_Raid", 00:16:58.106 "uuid": "26aa1160-7c53-47bf-aff0-9c734834feef", 00:16:58.106 "strip_size_kb": 0, 00:16:58.106 "state": "configuring", 00:16:58.106 "raid_level": "raid1", 00:16:58.106 "superblock": true, 00:16:58.106 "num_base_bdevs": 2, 00:16:58.106 "num_base_bdevs_discovered": 1, 00:16:58.106 "num_base_bdevs_operational": 2, 00:16:58.106 "base_bdevs_list": [ 00:16:58.106 { 00:16:58.106 "name": "BaseBdev1", 00:16:58.106 "uuid": "54de2f1e-2faf-462a-984d-666030f3d11f", 00:16:58.106 "is_configured": true, 00:16:58.106 "data_offset": 256, 00:16:58.106 "data_size": 7936 00:16:58.106 }, 00:16:58.106 { 00:16:58.106 "name": "BaseBdev2", 00:16:58.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.106 "is_configured": false, 00:16:58.106 "data_offset": 0, 00:16:58.106 "data_size": 0 00:16:58.106 } 00:16:58.106 ] 00:16:58.106 }' 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.106 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.366 [2024-12-06 23:51:09.840954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.366 [2024-12-06 23:51:09.841051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.366 [2024-12-06 23:51:09.852981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.366 [2024-12-06 23:51:09.854905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.366 [2024-12-06 23:51:09.854976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:58.366 23:51:09 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.366 "name": "Existed_Raid", 00:16:58.366 "uuid": "1b8c5907-edb6-4a60-ac06-4d626bb51d66", 00:16:58.366 "strip_size_kb": 0, 00:16:58.366 "state": "configuring", 00:16:58.366 "raid_level": "raid1", 00:16:58.366 "superblock": true, 
00:16:58.366 "num_base_bdevs": 2, 00:16:58.366 "num_base_bdevs_discovered": 1, 00:16:58.366 "num_base_bdevs_operational": 2, 00:16:58.366 "base_bdevs_list": [ 00:16:58.366 { 00:16:58.366 "name": "BaseBdev1", 00:16:58.366 "uuid": "54de2f1e-2faf-462a-984d-666030f3d11f", 00:16:58.366 "is_configured": true, 00:16:58.366 "data_offset": 256, 00:16:58.366 "data_size": 7936 00:16:58.366 }, 00:16:58.366 { 00:16:58.366 "name": "BaseBdev2", 00:16:58.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.366 "is_configured": false, 00:16:58.366 "data_offset": 0, 00:16:58.366 "data_size": 0 00:16:58.366 } 00:16:58.366 ] 00:16:58.366 }' 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.366 23:51:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.937 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:58.937 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.937 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.937 [2024-12-06 23:51:10.327355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.937 [2024-12-06 23:51:10.327626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:58.937 [2024-12-06 23:51:10.327647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:58.937 [2024-12-06 23:51:10.327921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:58.937 [2024-12-06 23:51:10.328109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:58.937 [2024-12-06 23:51:10.328124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:16:58.937 BaseBdev2 00:16:58.937 [2024-12-06 23:51:10.328268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.938 [ 00:16:58.938 { 00:16:58.938 "name": "BaseBdev2", 00:16:58.938 "aliases": [ 00:16:58.938 "f523117e-e632-4d05-b4a3-1c3861793867" 00:16:58.938 ], 00:16:58.938 "product_name": "Malloc 
disk", 00:16:58.938 "block_size": 4096, 00:16:58.938 "num_blocks": 8192, 00:16:58.938 "uuid": "f523117e-e632-4d05-b4a3-1c3861793867", 00:16:58.938 "assigned_rate_limits": { 00:16:58.938 "rw_ios_per_sec": 0, 00:16:58.938 "rw_mbytes_per_sec": 0, 00:16:58.938 "r_mbytes_per_sec": 0, 00:16:58.938 "w_mbytes_per_sec": 0 00:16:58.938 }, 00:16:58.938 "claimed": true, 00:16:58.938 "claim_type": "exclusive_write", 00:16:58.938 "zoned": false, 00:16:58.938 "supported_io_types": { 00:16:58.938 "read": true, 00:16:58.938 "write": true, 00:16:58.938 "unmap": true, 00:16:58.938 "flush": true, 00:16:58.938 "reset": true, 00:16:58.938 "nvme_admin": false, 00:16:58.938 "nvme_io": false, 00:16:58.938 "nvme_io_md": false, 00:16:58.938 "write_zeroes": true, 00:16:58.938 "zcopy": true, 00:16:58.938 "get_zone_info": false, 00:16:58.938 "zone_management": false, 00:16:58.938 "zone_append": false, 00:16:58.938 "compare": false, 00:16:58.938 "compare_and_write": false, 00:16:58.938 "abort": true, 00:16:58.938 "seek_hole": false, 00:16:58.938 "seek_data": false, 00:16:58.938 "copy": true, 00:16:58.938 "nvme_iov_md": false 00:16:58.938 }, 00:16:58.938 "memory_domains": [ 00:16:58.938 { 00:16:58.938 "dma_device_id": "system", 00:16:58.938 "dma_device_type": 1 00:16:58.938 }, 00:16:58.938 { 00:16:58.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.938 "dma_device_type": 2 00:16:58.938 } 00:16:58.938 ], 00:16:58.938 "driver_specific": {} 00:16:58.938 } 00:16:58.938 ] 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.938 "name": "Existed_Raid", 00:16:58.938 "uuid": "1b8c5907-edb6-4a60-ac06-4d626bb51d66", 00:16:58.938 "strip_size_kb": 0, 00:16:58.938 "state": "online", 
00:16:58.938 "raid_level": "raid1", 00:16:58.938 "superblock": true, 00:16:58.938 "num_base_bdevs": 2, 00:16:58.938 "num_base_bdevs_discovered": 2, 00:16:58.938 "num_base_bdevs_operational": 2, 00:16:58.938 "base_bdevs_list": [ 00:16:58.938 { 00:16:58.938 "name": "BaseBdev1", 00:16:58.938 "uuid": "54de2f1e-2faf-462a-984d-666030f3d11f", 00:16:58.938 "is_configured": true, 00:16:58.938 "data_offset": 256, 00:16:58.938 "data_size": 7936 00:16:58.938 }, 00:16:58.938 { 00:16:58.938 "name": "BaseBdev2", 00:16:58.938 "uuid": "f523117e-e632-4d05-b4a3-1c3861793867", 00:16:58.938 "is_configured": true, 00:16:58.938 "data_offset": 256, 00:16:58.938 "data_size": 7936 00:16:58.938 } 00:16:58.938 ] 00:16:58.938 }' 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.938 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:59.199 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.199 [2024-12-06 23:51:10.746902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.462 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.462 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.462 "name": "Existed_Raid", 00:16:59.462 "aliases": [ 00:16:59.462 "1b8c5907-edb6-4a60-ac06-4d626bb51d66" 00:16:59.462 ], 00:16:59.462 "product_name": "Raid Volume", 00:16:59.462 "block_size": 4096, 00:16:59.462 "num_blocks": 7936, 00:16:59.462 "uuid": "1b8c5907-edb6-4a60-ac06-4d626bb51d66", 00:16:59.462 "assigned_rate_limits": { 00:16:59.462 "rw_ios_per_sec": 0, 00:16:59.462 "rw_mbytes_per_sec": 0, 00:16:59.462 "r_mbytes_per_sec": 0, 00:16:59.462 "w_mbytes_per_sec": 0 00:16:59.462 }, 00:16:59.462 "claimed": false, 00:16:59.462 "zoned": false, 00:16:59.462 "supported_io_types": { 00:16:59.462 "read": true, 00:16:59.462 "write": true, 00:16:59.462 "unmap": false, 00:16:59.462 "flush": false, 00:16:59.462 "reset": true, 00:16:59.462 "nvme_admin": false, 00:16:59.462 "nvme_io": false, 00:16:59.462 "nvme_io_md": false, 00:16:59.462 "write_zeroes": true, 00:16:59.462 "zcopy": false, 00:16:59.462 "get_zone_info": false, 00:16:59.462 "zone_management": false, 00:16:59.462 "zone_append": false, 00:16:59.462 "compare": false, 00:16:59.462 "compare_and_write": false, 00:16:59.463 "abort": false, 00:16:59.463 "seek_hole": false, 00:16:59.463 "seek_data": false, 00:16:59.463 "copy": false, 00:16:59.463 "nvme_iov_md": false 00:16:59.463 }, 00:16:59.463 "memory_domains": [ 00:16:59.463 { 00:16:59.463 "dma_device_id": "system", 00:16:59.463 "dma_device_type": 1 00:16:59.463 }, 00:16:59.463 { 00:16:59.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.463 "dma_device_type": 2 00:16:59.463 }, 00:16:59.463 { 00:16:59.463 
"dma_device_id": "system", 00:16:59.463 "dma_device_type": 1 00:16:59.463 }, 00:16:59.463 { 00:16:59.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.463 "dma_device_type": 2 00:16:59.463 } 00:16:59.463 ], 00:16:59.463 "driver_specific": { 00:16:59.463 "raid": { 00:16:59.463 "uuid": "1b8c5907-edb6-4a60-ac06-4d626bb51d66", 00:16:59.463 "strip_size_kb": 0, 00:16:59.463 "state": "online", 00:16:59.463 "raid_level": "raid1", 00:16:59.463 "superblock": true, 00:16:59.463 "num_base_bdevs": 2, 00:16:59.463 "num_base_bdevs_discovered": 2, 00:16:59.463 "num_base_bdevs_operational": 2, 00:16:59.463 "base_bdevs_list": [ 00:16:59.463 { 00:16:59.463 "name": "BaseBdev1", 00:16:59.463 "uuid": "54de2f1e-2faf-462a-984d-666030f3d11f", 00:16:59.463 "is_configured": true, 00:16:59.463 "data_offset": 256, 00:16:59.463 "data_size": 7936 00:16:59.463 }, 00:16:59.463 { 00:16:59.463 "name": "BaseBdev2", 00:16:59.463 "uuid": "f523117e-e632-4d05-b4a3-1c3861793867", 00:16:59.463 "is_configured": true, 00:16:59.463 "data_offset": 256, 00:16:59.463 "data_size": 7936 00:16:59.463 } 00:16:59.463 ] 00:16:59.463 } 00:16:59.463 } 00:16:59.463 }' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:59.463 BaseBdev2' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:59.463 23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.463 
23:51:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.463 [2024-12-06 23:51:10.958332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.724 23:51:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.724 "name": "Existed_Raid", 00:16:59.724 "uuid": "1b8c5907-edb6-4a60-ac06-4d626bb51d66", 00:16:59.724 "strip_size_kb": 0, 00:16:59.724 "state": "online", 00:16:59.724 "raid_level": "raid1", 00:16:59.724 "superblock": true, 00:16:59.724 "num_base_bdevs": 2, 00:16:59.724 "num_base_bdevs_discovered": 1, 00:16:59.724 "num_base_bdevs_operational": 1, 00:16:59.724 "base_bdevs_list": [ 00:16:59.724 { 00:16:59.724 "name": null, 00:16:59.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.724 "is_configured": false, 00:16:59.724 "data_offset": 0, 00:16:59.724 "data_size": 7936 00:16:59.724 }, 00:16:59.724 { 00:16:59.724 "name": "BaseBdev2", 00:16:59.724 "uuid": "f523117e-e632-4d05-b4a3-1c3861793867", 00:16:59.724 "is_configured": true, 00:16:59.724 "data_offset": 256, 00:16:59.724 "data_size": 7936 00:16:59.724 } 00:16:59.724 ] 00:16:59.724 }' 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.724 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:59.984 23:51:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.984 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.984 [2024-12-06 23:51:11.518444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.984 [2024-12-06 23:51:11.518582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.245 [2024-12-06 23:51:11.607074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.245 [2024-12-06 23:51:11.607123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.245 [2024-12-06 23:51:11.607134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:00.245 23:51:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85834 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85834 ']' 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85834 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85834 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.245 killing process with pid 85834 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85834' 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85834 00:17:00.245 [2024-12-06 23:51:11.685151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.245 23:51:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85834 00:17:00.245 [2024-12-06 23:51:11.701390] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.629 23:51:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:01.629 00:17:01.629 real 0m4.791s 00:17:01.629 user 0m6.796s 00:17:01.629 sys 0m0.911s 00:17:01.629 ************************************ 00:17:01.629 END TEST raid_state_function_test_sb_4k 00:17:01.629 ************************************ 00:17:01.629 23:51:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.629 23:51:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.629 23:51:12 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:01.629 23:51:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:01.629 23:51:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.629 23:51:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.629 ************************************ 00:17:01.629 START TEST raid_superblock_test_4k 00:17:01.629 ************************************ 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86080 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86080 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86080 ']' 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.629 23:51:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.629 [2024-12-06 23:51:12.939290] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:17:01.629 [2024-12-06 23:51:12.939499] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86080 ] 00:17:01.629 [2024-12-06 23:51:13.117884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.890 [2024-12-06 23:51:13.220891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.890 [2024-12-06 23:51:13.413600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.890 [2024-12-06 23:51:13.413638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:02.460 23:51:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.460 malloc1 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.460 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.460 [2024-12-06 23:51:13.792960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.460 [2024-12-06 23:51:13.793073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.460 
[2024-12-06 23:51:13.793140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:02.460 [2024-12-06 23:51:13.793171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.460 [2024-12-06 23:51:13.795254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.460 [2024-12-06 23:51:13.795326] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.461 pt1 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.461 malloc2 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.461 [2024-12-06 23:51:13.847436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:02.461 [2024-12-06 23:51:13.847490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.461 [2024-12-06 23:51:13.847516] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:02.461 [2024-12-06 23:51:13.847524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.461 [2024-12-06 23:51:13.849575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.461 [2024-12-06 23:51:13.849610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:02.461 pt2 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.461 [2024-12-06 23:51:13.859467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.461 [2024-12-06 23:51:13.861228] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.461 [2024-12-06 23:51:13.861405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:02.461 [2024-12-06 23:51:13.861421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:02.461 [2024-12-06 23:51:13.861663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:02.461 [2024-12-06 23:51:13.861813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:02.461 [2024-12-06 23:51:13.861827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:02.461 [2024-12-06 23:51:13.861950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.461 "name": "raid_bdev1", 00:17:02.461 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:02.461 "strip_size_kb": 0, 00:17:02.461 "state": "online", 00:17:02.461 "raid_level": "raid1", 00:17:02.461 "superblock": true, 00:17:02.461 "num_base_bdevs": 2, 00:17:02.461 "num_base_bdevs_discovered": 2, 00:17:02.461 "num_base_bdevs_operational": 2, 00:17:02.461 "base_bdevs_list": [ 00:17:02.461 { 00:17:02.461 "name": "pt1", 00:17:02.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.461 "is_configured": true, 00:17:02.461 "data_offset": 256, 00:17:02.461 "data_size": 7936 00:17:02.461 }, 00:17:02.461 { 00:17:02.461 "name": "pt2", 00:17:02.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.461 "is_configured": true, 00:17:02.461 "data_offset": 256, 00:17:02.461 "data_size": 7936 00:17:02.461 } 00:17:02.461 ] 00:17:02.461 }' 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.461 23:51:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:03.032 23:51:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.032 [2024-12-06 23:51:14.326876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.032 "name": "raid_bdev1", 00:17:03.032 "aliases": [ 00:17:03.032 "30db1387-170d-430c-aa5f-768a9c790d12" 00:17:03.032 ], 00:17:03.032 "product_name": "Raid Volume", 00:17:03.032 "block_size": 4096, 00:17:03.032 "num_blocks": 7936, 00:17:03.032 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:03.032 "assigned_rate_limits": { 00:17:03.032 "rw_ios_per_sec": 0, 00:17:03.032 "rw_mbytes_per_sec": 0, 00:17:03.032 "r_mbytes_per_sec": 0, 00:17:03.032 "w_mbytes_per_sec": 0 00:17:03.032 }, 00:17:03.032 "claimed": false, 00:17:03.032 "zoned": false, 00:17:03.032 "supported_io_types": { 00:17:03.032 "read": true, 00:17:03.032 "write": true, 00:17:03.032 "unmap": false, 00:17:03.032 "flush": false, 
00:17:03.032 "reset": true, 00:17:03.032 "nvme_admin": false, 00:17:03.032 "nvme_io": false, 00:17:03.032 "nvme_io_md": false, 00:17:03.032 "write_zeroes": true, 00:17:03.032 "zcopy": false, 00:17:03.032 "get_zone_info": false, 00:17:03.032 "zone_management": false, 00:17:03.032 "zone_append": false, 00:17:03.032 "compare": false, 00:17:03.032 "compare_and_write": false, 00:17:03.032 "abort": false, 00:17:03.032 "seek_hole": false, 00:17:03.032 "seek_data": false, 00:17:03.032 "copy": false, 00:17:03.032 "nvme_iov_md": false 00:17:03.032 }, 00:17:03.032 "memory_domains": [ 00:17:03.032 { 00:17:03.032 "dma_device_id": "system", 00:17:03.032 "dma_device_type": 1 00:17:03.032 }, 00:17:03.032 { 00:17:03.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.032 "dma_device_type": 2 00:17:03.032 }, 00:17:03.032 { 00:17:03.032 "dma_device_id": "system", 00:17:03.032 "dma_device_type": 1 00:17:03.032 }, 00:17:03.032 { 00:17:03.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.032 "dma_device_type": 2 00:17:03.032 } 00:17:03.032 ], 00:17:03.032 "driver_specific": { 00:17:03.032 "raid": { 00:17:03.032 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:03.032 "strip_size_kb": 0, 00:17:03.032 "state": "online", 00:17:03.032 "raid_level": "raid1", 00:17:03.032 "superblock": true, 00:17:03.032 "num_base_bdevs": 2, 00:17:03.032 "num_base_bdevs_discovered": 2, 00:17:03.032 "num_base_bdevs_operational": 2, 00:17:03.032 "base_bdevs_list": [ 00:17:03.032 { 00:17:03.032 "name": "pt1", 00:17:03.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.032 "is_configured": true, 00:17:03.032 "data_offset": 256, 00:17:03.032 "data_size": 7936 00:17:03.032 }, 00:17:03.032 { 00:17:03.032 "name": "pt2", 00:17:03.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.032 "is_configured": true, 00:17:03.032 "data_offset": 256, 00:17:03.032 "data_size": 7936 00:17:03.032 } 00:17:03.032 ] 00:17:03.032 } 00:17:03.032 } 00:17:03.032 }' 00:17:03.032 23:51:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:03.032 pt2' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.032 23:51:14 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.032 [2024-12-06 23:51:14.562425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.032 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=30db1387-170d-430c-aa5f-768a9c790d12 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 30db1387-170d-430c-aa5f-768a9c790d12 ']' 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 [2024-12-06 23:51:14.606105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.294 [2024-12-06 23:51:14.606125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.294 [2024-12-06 23:51:14.606192] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.294 [2024-12-06 23:51:14.606241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.294 [2024-12-06 23:51:14.606254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 [2024-12-06 23:51:14.745908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:03.294 [2024-12-06 23:51:14.747699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:03.294 [2024-12-06 23:51:14.747795] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:03.294 [2024-12-06 23:51:14.747891] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:03.294 [2024-12-06 23:51:14.747929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.294 [2024-12-06 23:51:14.747951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:03.294 request: 00:17:03.294 { 00:17:03.294 "name": "raid_bdev1", 00:17:03.294 "raid_level": "raid1", 00:17:03.294 "base_bdevs": [ 00:17:03.294 "malloc1", 00:17:03.294 "malloc2" 00:17:03.294 ], 00:17:03.294 "superblock": false, 00:17:03.294 "method": "bdev_raid_create", 00:17:03.294 "req_id": 1 00:17:03.294 } 00:17:03.294 Got JSON-RPC error response 00:17:03.294 response: 00:17:03.294 { 00:17:03.294 "code": -17, 00:17:03.294 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:03.294 } 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.294 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 [2024-12-06 23:51:14.809768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.294 [2024-12-06 23:51:14.809869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.294 [2024-12-06 23:51:14.809889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:03.294 [2024-12-06 23:51:14.809899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.294 [2024-12-06 23:51:14.811919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.294 [2024-12-06 23:51:14.811979] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.294 [2024-12-06 23:51:14.812039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:03.294 [2024-12-06 23:51:14.812092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.294 pt1 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.295 "name": "raid_bdev1", 00:17:03.295 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:03.295 "strip_size_kb": 0, 00:17:03.295 "state": "configuring", 00:17:03.295 "raid_level": "raid1", 00:17:03.295 "superblock": true, 00:17:03.295 "num_base_bdevs": 2, 00:17:03.295 "num_base_bdevs_discovered": 1, 00:17:03.295 "num_base_bdevs_operational": 2, 00:17:03.295 "base_bdevs_list": [ 00:17:03.295 { 00:17:03.295 "name": "pt1", 00:17:03.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.295 "is_configured": true, 00:17:03.295 "data_offset": 256, 00:17:03.295 "data_size": 7936 00:17:03.295 }, 00:17:03.295 { 00:17:03.295 "name": null, 00:17:03.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.295 "is_configured": false, 00:17:03.295 "data_offset": 256, 00:17:03.295 "data_size": 7936 00:17:03.295 } 00:17:03.295 ] 00:17:03.295 }' 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.295 23:51:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:03.865 [2024-12-06 23:51:15.280992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.865 [2024-12-06 23:51:15.281085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.865 [2024-12-06 23:51:15.281121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:03.865 [2024-12-06 23:51:15.281150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.865 [2024-12-06 23:51:15.281524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.865 [2024-12-06 23:51:15.281585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.865 [2024-12-06 23:51:15.281678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:03.865 [2024-12-06 23:51:15.281730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.865 [2024-12-06 23:51:15.281883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:03.865 [2024-12-06 23:51:15.281923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.865 [2024-12-06 23:51:15.282162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:03.865 [2024-12-06 23:51:15.282347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.865 [2024-12-06 23:51:15.282385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:03.865 [2024-12-06 23:51:15.282557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.865 pt2 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:03.865 23:51:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.865 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.865 "name": "raid_bdev1", 00:17:03.865 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:03.865 
"strip_size_kb": 0, 00:17:03.865 "state": "online", 00:17:03.865 "raid_level": "raid1", 00:17:03.865 "superblock": true, 00:17:03.865 "num_base_bdevs": 2, 00:17:03.865 "num_base_bdevs_discovered": 2, 00:17:03.865 "num_base_bdevs_operational": 2, 00:17:03.865 "base_bdevs_list": [ 00:17:03.865 { 00:17:03.865 "name": "pt1", 00:17:03.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.866 "is_configured": true, 00:17:03.866 "data_offset": 256, 00:17:03.866 "data_size": 7936 00:17:03.866 }, 00:17:03.866 { 00:17:03.866 "name": "pt2", 00:17:03.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.866 "is_configured": true, 00:17:03.866 "data_offset": 256, 00:17:03.866 "data_size": 7936 00:17:03.866 } 00:17:03.866 ] 00:17:03.866 }' 00:17:03.866 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.866 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.434 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.434 23:51:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:04.435 [2024-12-06 23:51:15.752369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:04.435 "name": "raid_bdev1", 00:17:04.435 "aliases": [ 00:17:04.435 "30db1387-170d-430c-aa5f-768a9c790d12" 00:17:04.435 ], 00:17:04.435 "product_name": "Raid Volume", 00:17:04.435 "block_size": 4096, 00:17:04.435 "num_blocks": 7936, 00:17:04.435 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:04.435 "assigned_rate_limits": { 00:17:04.435 "rw_ios_per_sec": 0, 00:17:04.435 "rw_mbytes_per_sec": 0, 00:17:04.435 "r_mbytes_per_sec": 0, 00:17:04.435 "w_mbytes_per_sec": 0 00:17:04.435 }, 00:17:04.435 "claimed": false, 00:17:04.435 "zoned": false, 00:17:04.435 "supported_io_types": { 00:17:04.435 "read": true, 00:17:04.435 "write": true, 00:17:04.435 "unmap": false, 00:17:04.435 "flush": false, 00:17:04.435 "reset": true, 00:17:04.435 "nvme_admin": false, 00:17:04.435 "nvme_io": false, 00:17:04.435 "nvme_io_md": false, 00:17:04.435 "write_zeroes": true, 00:17:04.435 "zcopy": false, 00:17:04.435 "get_zone_info": false, 00:17:04.435 "zone_management": false, 00:17:04.435 "zone_append": false, 00:17:04.435 "compare": false, 00:17:04.435 "compare_and_write": false, 00:17:04.435 "abort": false, 00:17:04.435 "seek_hole": false, 00:17:04.435 "seek_data": false, 00:17:04.435 "copy": false, 00:17:04.435 "nvme_iov_md": false 00:17:04.435 }, 00:17:04.435 "memory_domains": [ 00:17:04.435 { 00:17:04.435 "dma_device_id": "system", 00:17:04.435 "dma_device_type": 1 00:17:04.435 }, 00:17:04.435 { 00:17:04.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.435 "dma_device_type": 2 00:17:04.435 }, 00:17:04.435 { 00:17:04.435 "dma_device_id": "system", 00:17:04.435 
"dma_device_type": 1 00:17:04.435 }, 00:17:04.435 { 00:17:04.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.435 "dma_device_type": 2 00:17:04.435 } 00:17:04.435 ], 00:17:04.435 "driver_specific": { 00:17:04.435 "raid": { 00:17:04.435 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:04.435 "strip_size_kb": 0, 00:17:04.435 "state": "online", 00:17:04.435 "raid_level": "raid1", 00:17:04.435 "superblock": true, 00:17:04.435 "num_base_bdevs": 2, 00:17:04.435 "num_base_bdevs_discovered": 2, 00:17:04.435 "num_base_bdevs_operational": 2, 00:17:04.435 "base_bdevs_list": [ 00:17:04.435 { 00:17:04.435 "name": "pt1", 00:17:04.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.435 "is_configured": true, 00:17:04.435 "data_offset": 256, 00:17:04.435 "data_size": 7936 00:17:04.435 }, 00:17:04.435 { 00:17:04.435 "name": "pt2", 00:17:04.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.435 "is_configured": true, 00:17:04.435 "data_offset": 256, 00:17:04.435 "data_size": 7936 00:17:04.435 } 00:17:04.435 ] 00:17:04.435 } 00:17:04.435 } 00:17:04.435 }' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:04.435 pt2' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.435 23:51:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:04.435 [2024-12-06 23:51:15.984221] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 30db1387-170d-430c-aa5f-768a9c790d12 '!=' 30db1387-170d-430c-aa5f-768a9c790d12 ']' 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.694 [2024-12-06 23:51:16.035955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.694 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.694 "name": "raid_bdev1", 00:17:04.694 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:04.694 "strip_size_kb": 0, 00:17:04.694 "state": "online", 00:17:04.694 "raid_level": "raid1", 00:17:04.694 "superblock": true, 00:17:04.694 "num_base_bdevs": 2, 00:17:04.694 "num_base_bdevs_discovered": 1, 00:17:04.694 "num_base_bdevs_operational": 1, 00:17:04.694 "base_bdevs_list": [ 00:17:04.694 { 00:17:04.694 "name": null, 00:17:04.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.694 "is_configured": false, 00:17:04.694 "data_offset": 0, 00:17:04.694 "data_size": 7936 00:17:04.695 }, 00:17:04.695 { 00:17:04.695 "name": "pt2", 00:17:04.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.695 "is_configured": true, 00:17:04.695 "data_offset": 256, 00:17:04.695 "data_size": 7936 00:17:04.695 } 00:17:04.695 ] 00:17:04.695 }' 00:17:04.695 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.695 23:51:16 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.953 [2024-12-06 23:51:16.467199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.953 [2024-12-06 23:51:16.467221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.953 [2024-12-06 23:51:16.467273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.953 [2024-12-06 23:51:16.467308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.953 [2024-12-06 23:51:16.467318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.953 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.212 [2024-12-06 23:51:16.543080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.212 [2024-12-06 23:51:16.543123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.212 [2024-12-06 23:51:16.543136] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:05.212 [2024-12-06 23:51:16.543145] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.212 [2024-12-06 23:51:16.545217] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.212 [2024-12-06 23:51:16.545304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.212 [2024-12-06 23:51:16.545370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:05.212 [2024-12-06 23:51:16.545411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.212 [2024-12-06 23:51:16.545496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:05.212 [2024-12-06 23:51:16.545508] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:05.212 [2024-12-06 23:51:16.545737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:05.212 [2024-12-06 23:51:16.545878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:05.212 [2024-12-06 23:51:16.545888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:05.212 [2024-12-06 23:51:16.546016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.212 pt2 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.212 "name": "raid_bdev1", 00:17:05.212 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:05.212 "strip_size_kb": 0, 00:17:05.212 "state": "online", 00:17:05.212 "raid_level": "raid1", 00:17:05.212 "superblock": true, 00:17:05.212 "num_base_bdevs": 2, 00:17:05.212 "num_base_bdevs_discovered": 1, 00:17:05.212 "num_base_bdevs_operational": 1, 00:17:05.212 "base_bdevs_list": [ 00:17:05.212 { 00:17:05.212 "name": null, 00:17:05.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.212 "is_configured": false, 00:17:05.212 "data_offset": 256, 00:17:05.212 "data_size": 7936 00:17:05.212 }, 00:17:05.212 { 00:17:05.212 "name": "pt2", 00:17:05.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.212 "is_configured": true, 00:17:05.212 "data_offset": 256, 00:17:05.212 "data_size": 7936 00:17:05.212 } 00:17:05.212 ] 00:17:05.212 }' 
00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.212 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.471 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:05.471 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.471 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.471 [2024-12-06 23:51:16.986262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.471 [2024-12-06 23:51:16.986328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.471 [2024-12-06 23:51:16.986409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.471 [2024-12-06 23:51:16.986461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.471 [2024-12-06 23:51:16.986522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:05.471 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.471 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.471 23:51:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:05.471 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.472 23:51:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.472 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.731 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:05.731 23:51:17 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:05.731 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:05.731 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.731 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.731 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.731 [2024-12-06 23:51:17.046183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.731 [2024-12-06 23:51:17.046282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.731 [2024-12-06 23:51:17.046303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:05.731 [2024-12-06 23:51:17.046311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.731 [2024-12-06 23:51:17.048371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.731 [2024-12-06 23:51:17.048452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.731 [2024-12-06 23:51:17.048519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:05.731 [2024-12-06 23:51:17.048563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.731 [2024-12-06 23:51:17.048716] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:05.731 [2024-12-06 23:51:17.048727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.731 [2024-12-06 23:51:17.048741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:05.731 [2024-12-06 23:51:17.048795] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.731 [2024-12-06 23:51:17.048861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:05.731 [2024-12-06 23:51:17.048869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:05.731 [2024-12-06 23:51:17.049091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:05.731 [2024-12-06 23:51:17.049225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:05.731 [2024-12-06 23:51:17.049237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:05.731 [2024-12-06 23:51:17.049381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.731 pt1 00:17:05.731 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.732 "name": "raid_bdev1", 00:17:05.732 "uuid": "30db1387-170d-430c-aa5f-768a9c790d12", 00:17:05.732 "strip_size_kb": 0, 00:17:05.732 "state": "online", 00:17:05.732 "raid_level": "raid1", 00:17:05.732 "superblock": true, 00:17:05.732 "num_base_bdevs": 2, 00:17:05.732 "num_base_bdevs_discovered": 1, 00:17:05.732 "num_base_bdevs_operational": 1, 00:17:05.732 "base_bdevs_list": [ 00:17:05.732 { 00:17:05.732 "name": null, 00:17:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.732 "is_configured": false, 00:17:05.732 "data_offset": 256, 00:17:05.732 "data_size": 7936 00:17:05.732 }, 00:17:05.732 { 00:17:05.732 "name": "pt2", 00:17:05.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.732 "is_configured": true, 00:17:05.732 "data_offset": 256, 00:17:05.732 "data_size": 7936 00:17:05.732 } 00:17:05.732 ] 00:17:05.732 }' 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.732 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.992 [2024-12-06 23:51:17.533591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.992 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 30db1387-170d-430c-aa5f-768a9c790d12 '!=' 30db1387-170d-430c-aa5f-768a9c790d12 ']' 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86080 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86080 ']' 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86080 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86080 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86080' 00:17:06.252 killing process with pid 86080 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86080 00:17:06.252 [2024-12-06 23:51:17.602143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:06.252 [2024-12-06 23:51:17.602209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.252 [2024-12-06 23:51:17.602246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.252 [2024-12-06 23:51:17.602259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:06.252 23:51:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86080 00:17:06.252 [2024-12-06 23:51:17.793886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:07.635 23:51:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:07.635 00:17:07.635 real 0m6.023s 00:17:07.635 user 0m9.154s 00:17:07.635 sys 0m1.137s 00:17:07.635 ************************************ 00:17:07.635 END TEST raid_superblock_test_4k 00:17:07.635 ************************************ 00:17:07.635 23:51:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.635 23:51:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.635 23:51:18 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:07.635 23:51:18 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:07.635 23:51:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:07.635 23:51:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.635 23:51:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.635 ************************************ 00:17:07.635 START TEST raid_rebuild_test_sb_4k 00:17:07.635 ************************************ 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:07.635 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86403 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86403 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86403 ']' 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:07.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.636 23:51:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.636 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:07.636 Zero copy mechanism will not be used. 00:17:07.636 [2024-12-06 23:51:19.049915] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:17:07.636 [2024-12-06 23:51:19.050038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86403 ] 00:17:07.896 [2024-12-06 23:51:19.223303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.896 [2024-12-06 23:51:19.326502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.156 [2024-12-06 23:51:19.510083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.156 [2024-12-06 23:51:19.510113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:08.416 
23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.416 BaseBdev1_malloc 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.416 [2024-12-06 23:51:19.903401] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:08.416 [2024-12-06 23:51:19.903458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.416 [2024-12-06 23:51:19.903480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:08.416 [2024-12-06 23:51:19.903491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.416 [2024-12-06 23:51:19.905528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.416 [2024-12-06 23:51:19.905578] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:08.416 BaseBdev1 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.416 BaseBdev2_malloc 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.416 [2024-12-06 23:51:19.953194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:08.416 [2024-12-06 23:51:19.953314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.416 [2024-12-06 23:51:19.953342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:08.416 [2024-12-06 23:51:19.953353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.416 [2024-12-06 23:51:19.955423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.416 [2024-12-06 23:51:19.955461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:08.416 BaseBdev2 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.416 23:51:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.676 spare_malloc 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.676 spare_delay 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.676 [2024-12-06 23:51:20.050228] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.676 [2024-12-06 23:51:20.050354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.676 [2024-12-06 23:51:20.050377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:08.676 [2024-12-06 23:51:20.050387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.676 [2024-12-06 23:51:20.052538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.676 [2024-12-06 23:51:20.052583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.676 spare 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.676 
[2024-12-06 23:51:20.062261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.676 [2024-12-06 23:51:20.063962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.676 [2024-12-06 23:51:20.064169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:08.676 [2024-12-06 23:51:20.064185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.676 [2024-12-06 23:51:20.064423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:08.676 [2024-12-06 23:51:20.064607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:08.676 [2024-12-06 23:51:20.064616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:08.676 [2024-12-06 23:51:20.064769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.676 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.676 "name": "raid_bdev1", 00:17:08.676 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:08.676 "strip_size_kb": 0, 00:17:08.676 "state": "online", 00:17:08.676 "raid_level": "raid1", 00:17:08.677 "superblock": true, 00:17:08.677 "num_base_bdevs": 2, 00:17:08.677 "num_base_bdevs_discovered": 2, 00:17:08.677 "num_base_bdevs_operational": 2, 00:17:08.677 "base_bdevs_list": [ 00:17:08.677 { 00:17:08.677 "name": "BaseBdev1", 00:17:08.677 "uuid": "ce3e54fc-0878-5435-97e6-522ea244e14f", 00:17:08.677 "is_configured": true, 00:17:08.677 "data_offset": 256, 00:17:08.677 "data_size": 7936 00:17:08.677 }, 00:17:08.677 { 00:17:08.677 "name": "BaseBdev2", 00:17:08.677 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:08.677 "is_configured": true, 00:17:08.677 "data_offset": 256, 00:17:08.677 "data_size": 7936 00:17:08.677 } 00:17:08.677 ] 00:17:08.677 }' 00:17:08.677 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.677 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.247 [2024-12-06 23:51:20.557624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:09.247 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:09.248 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:09.248 [2024-12-06 23:51:20.808975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:09.508 /dev/nbd0 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.508 1+0 records in 00:17:09.508 1+0 records out 00:17:09.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606406 s, 6.8 MB/s 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:09.508 23:51:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:10.078 7936+0 records in 00:17:10.078 7936+0 records out 00:17:10.078 32505856 bytes (33 MB, 31 MiB) copied, 0.616498 s, 52.7 MB/s 00:17:10.078 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:10.078 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.078 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:10.078 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:10.078 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:10.078 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:10.078 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:10.339 [2024-12-06 23:51:21.717297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.339 [2024-12-06 23:51:21.733375] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.339 "name": 
"raid_bdev1", 00:17:10.339 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:10.339 "strip_size_kb": 0, 00:17:10.339 "state": "online", 00:17:10.339 "raid_level": "raid1", 00:17:10.339 "superblock": true, 00:17:10.339 "num_base_bdevs": 2, 00:17:10.339 "num_base_bdevs_discovered": 1, 00:17:10.339 "num_base_bdevs_operational": 1, 00:17:10.339 "base_bdevs_list": [ 00:17:10.339 { 00:17:10.339 "name": null, 00:17:10.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.339 "is_configured": false, 00:17:10.339 "data_offset": 0, 00:17:10.339 "data_size": 7936 00:17:10.339 }, 00:17:10.339 { 00:17:10.339 "name": "BaseBdev2", 00:17:10.339 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:10.339 "is_configured": true, 00:17:10.339 "data_offset": 256, 00:17:10.339 "data_size": 7936 00:17:10.339 } 00:17:10.339 ] 00:17:10.339 }' 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.339 23:51:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.599 23:51:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:10.599 23:51:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.599 23:51:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.599 [2024-12-06 23:51:22.112715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.599 [2024-12-06 23:51:22.130451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:10.599 23:51:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.599 23:51:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:10.599 [2024-12-06 23:51:22.132710] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:11.979 23:51:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.979 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.979 "name": "raid_bdev1", 00:17:11.979 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:11.979 "strip_size_kb": 0, 00:17:11.979 "state": "online", 00:17:11.979 "raid_level": "raid1", 00:17:11.979 "superblock": true, 00:17:11.979 "num_base_bdevs": 2, 00:17:11.979 "num_base_bdevs_discovered": 2, 00:17:11.979 "num_base_bdevs_operational": 2, 00:17:11.979 "process": { 00:17:11.980 "type": "rebuild", 00:17:11.980 "target": "spare", 00:17:11.980 "progress": { 00:17:11.980 "blocks": 2560, 00:17:11.980 "percent": 32 00:17:11.980 } 00:17:11.980 }, 00:17:11.980 "base_bdevs_list": [ 00:17:11.980 { 00:17:11.980 "name": "spare", 00:17:11.980 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:11.980 "is_configured": true, 00:17:11.980 "data_offset": 256, 
00:17:11.980 "data_size": 7936 00:17:11.980 }, 00:17:11.980 { 00:17:11.980 "name": "BaseBdev2", 00:17:11.980 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:11.980 "is_configured": true, 00:17:11.980 "data_offset": 256, 00:17:11.980 "data_size": 7936 00:17:11.980 } 00:17:11.980 ] 00:17:11.980 }' 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.980 [2024-12-06 23:51:23.300739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:11.980 [2024-12-06 23:51:23.341197] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:11.980 [2024-12-06 23:51:23.341310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.980 [2024-12-06 23:51:23.341327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:11.980 [2024-12-06 23:51:23.341336] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.980 
23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.980 "name": "raid_bdev1", 00:17:11.980 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:11.980 "strip_size_kb": 0, 00:17:11.980 "state": "online", 00:17:11.980 "raid_level": "raid1", 00:17:11.980 "superblock": true, 00:17:11.980 "num_base_bdevs": 2, 00:17:11.980 "num_base_bdevs_discovered": 1, 00:17:11.980 
"num_base_bdevs_operational": 1, 00:17:11.980 "base_bdevs_list": [ 00:17:11.980 { 00:17:11.980 "name": null, 00:17:11.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.980 "is_configured": false, 00:17:11.980 "data_offset": 0, 00:17:11.980 "data_size": 7936 00:17:11.980 }, 00:17:11.980 { 00:17:11.980 "name": "BaseBdev2", 00:17:11.980 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:11.980 "is_configured": true, 00:17:11.980 "data_offset": 256, 00:17:11.980 "data_size": 7936 00:17:11.980 } 00:17:11.980 ] 00:17:11.980 }' 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.980 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.550 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.550 
"name": "raid_bdev1", 00:17:12.550 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:12.550 "strip_size_kb": 0, 00:17:12.550 "state": "online", 00:17:12.550 "raid_level": "raid1", 00:17:12.550 "superblock": true, 00:17:12.550 "num_base_bdevs": 2, 00:17:12.550 "num_base_bdevs_discovered": 1, 00:17:12.550 "num_base_bdevs_operational": 1, 00:17:12.551 "base_bdevs_list": [ 00:17:12.551 { 00:17:12.551 "name": null, 00:17:12.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.551 "is_configured": false, 00:17:12.551 "data_offset": 0, 00:17:12.551 "data_size": 7936 00:17:12.551 }, 00:17:12.551 { 00:17:12.551 "name": "BaseBdev2", 00:17:12.551 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:12.551 "is_configured": true, 00:17:12.551 "data_offset": 256, 00:17:12.551 "data_size": 7936 00:17:12.551 } 00:17:12.551 ] 00:17:12.551 }' 00:17:12.551 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.551 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.551 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.551 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.551 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:12.551 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.551 23:51:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.551 [2024-12-06 23:51:23.994381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.551 [2024-12-06 23:51:24.009080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:12.551 23:51:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:12.551 23:51:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:12.551 [2024-12-06 23:51:24.010994] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.491 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.751 "name": "raid_bdev1", 00:17:13.751 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:13.751 "strip_size_kb": 0, 00:17:13.751 "state": "online", 00:17:13.751 "raid_level": "raid1", 00:17:13.751 "superblock": true, 00:17:13.751 "num_base_bdevs": 2, 00:17:13.751 "num_base_bdevs_discovered": 2, 00:17:13.751 "num_base_bdevs_operational": 2, 00:17:13.751 "process": { 00:17:13.751 "type": "rebuild", 00:17:13.751 "target": "spare", 00:17:13.751 "progress": { 00:17:13.751 "blocks": 2560, 00:17:13.751 
"percent": 32 00:17:13.751 } 00:17:13.751 }, 00:17:13.751 "base_bdevs_list": [ 00:17:13.751 { 00:17:13.751 "name": "spare", 00:17:13.751 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:13.751 "is_configured": true, 00:17:13.751 "data_offset": 256, 00:17:13.751 "data_size": 7936 00:17:13.751 }, 00:17:13.751 { 00:17:13.751 "name": "BaseBdev2", 00:17:13.751 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:13.751 "is_configured": true, 00:17:13.751 "data_offset": 256, 00:17:13.751 "data_size": 7936 00:17:13.751 } 00:17:13.751 ] 00:17:13.751 }' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:13.751 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=674 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.751 "name": "raid_bdev1", 00:17:13.751 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:13.751 "strip_size_kb": 0, 00:17:13.751 "state": "online", 00:17:13.751 "raid_level": "raid1", 00:17:13.751 "superblock": true, 00:17:13.751 "num_base_bdevs": 2, 00:17:13.751 "num_base_bdevs_discovered": 2, 00:17:13.751 "num_base_bdevs_operational": 2, 00:17:13.751 "process": { 00:17:13.751 "type": "rebuild", 00:17:13.751 "target": "spare", 00:17:13.751 "progress": { 00:17:13.751 "blocks": 2816, 00:17:13.751 "percent": 35 00:17:13.751 } 00:17:13.751 }, 00:17:13.751 "base_bdevs_list": [ 00:17:13.751 { 00:17:13.751 "name": "spare", 00:17:13.751 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:13.751 "is_configured": true, 00:17:13.751 "data_offset": 256, 00:17:13.751 "data_size": 7936 00:17:13.751 }, 00:17:13.751 { 00:17:13.751 "name": "BaseBdev2", 
00:17:13.751 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:13.751 "is_configured": true, 00:17:13.751 "data_offset": 256, 00:17:13.751 "data_size": 7936 00:17:13.751 } 00:17:13.751 ] 00:17:13.751 }' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.751 23:51:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.133 "name": "raid_bdev1", 00:17:15.133 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:15.133 "strip_size_kb": 0, 00:17:15.133 "state": "online", 00:17:15.133 "raid_level": "raid1", 00:17:15.133 "superblock": true, 00:17:15.133 "num_base_bdevs": 2, 00:17:15.133 "num_base_bdevs_discovered": 2, 00:17:15.133 "num_base_bdevs_operational": 2, 00:17:15.133 "process": { 00:17:15.133 "type": "rebuild", 00:17:15.133 "target": "spare", 00:17:15.133 "progress": { 00:17:15.133 "blocks": 5632, 00:17:15.133 "percent": 70 00:17:15.133 } 00:17:15.133 }, 00:17:15.133 "base_bdevs_list": [ 00:17:15.133 { 00:17:15.133 "name": "spare", 00:17:15.133 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:15.133 "is_configured": true, 00:17:15.133 "data_offset": 256, 00:17:15.133 "data_size": 7936 00:17:15.133 }, 00:17:15.133 { 00:17:15.133 "name": "BaseBdev2", 00:17:15.133 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:15.133 "is_configured": true, 00:17:15.133 "data_offset": 256, 00:17:15.133 "data_size": 7936 00:17:15.133 } 00:17:15.133 ] 00:17:15.133 }' 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.133 23:51:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.703 [2024-12-06 23:51:27.123569] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:15.703 [2024-12-06 23:51:27.123720] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:15.703 [2024-12-06 23:51:27.123820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.969 "name": "raid_bdev1", 00:17:15.969 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:15.969 "strip_size_kb": 0, 00:17:15.969 "state": "online", 00:17:15.969 "raid_level": "raid1", 00:17:15.969 "superblock": true, 00:17:15.969 "num_base_bdevs": 2, 00:17:15.969 "num_base_bdevs_discovered": 2, 00:17:15.969 "num_base_bdevs_operational": 2, 00:17:15.969 "base_bdevs_list": [ 00:17:15.969 { 00:17:15.969 "name": 
"spare", 00:17:15.969 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:15.969 "is_configured": true, 00:17:15.969 "data_offset": 256, 00:17:15.969 "data_size": 7936 00:17:15.969 }, 00:17:15.969 { 00:17:15.969 "name": "BaseBdev2", 00:17:15.969 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:15.969 "is_configured": true, 00:17:15.969 "data_offset": 256, 00:17:15.969 "data_size": 7936 00:17:15.969 } 00:17:15.969 ] 00:17:15.969 }' 00:17:15.969 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.230 "name": "raid_bdev1", 00:17:16.230 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:16.230 "strip_size_kb": 0, 00:17:16.230 "state": "online", 00:17:16.230 "raid_level": "raid1", 00:17:16.230 "superblock": true, 00:17:16.230 "num_base_bdevs": 2, 00:17:16.230 "num_base_bdevs_discovered": 2, 00:17:16.230 "num_base_bdevs_operational": 2, 00:17:16.230 "base_bdevs_list": [ 00:17:16.230 { 00:17:16.230 "name": "spare", 00:17:16.230 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:16.230 "is_configured": true, 00:17:16.230 "data_offset": 256, 00:17:16.230 "data_size": 7936 00:17:16.230 }, 00:17:16.230 { 00:17:16.230 "name": "BaseBdev2", 00:17:16.230 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:16.230 "is_configured": true, 00:17:16.230 "data_offset": 256, 00:17:16.230 "data_size": 7936 00:17:16.230 } 00:17:16.230 ] 00:17:16.230 }' 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.230 "name": "raid_bdev1", 00:17:16.230 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:16.230 "strip_size_kb": 0, 00:17:16.230 "state": "online", 00:17:16.230 "raid_level": "raid1", 00:17:16.230 "superblock": true, 00:17:16.230 "num_base_bdevs": 2, 00:17:16.230 "num_base_bdevs_discovered": 2, 00:17:16.230 "num_base_bdevs_operational": 2, 00:17:16.230 "base_bdevs_list": [ 00:17:16.230 { 00:17:16.230 "name": "spare", 00:17:16.230 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:16.230 "is_configured": true, 00:17:16.230 "data_offset": 256, 00:17:16.230 "data_size": 7936 00:17:16.230 }, 00:17:16.230 
{ 00:17:16.230 "name": "BaseBdev2", 00:17:16.230 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:16.230 "is_configured": true, 00:17:16.230 "data_offset": 256, 00:17:16.230 "data_size": 7936 00:17:16.230 } 00:17:16.230 ] 00:17:16.230 }' 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.230 23:51:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.811 [2024-12-06 23:51:28.182352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.811 [2024-12-06 23:51:28.182426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.811 [2024-12-06 23:51:28.182527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.811 [2024-12-06 23:51:28.182602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.811 [2024-12-06 23:51:28.182674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.811 
23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.811 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:16.812 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:16.812 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:16.812 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:16.812 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:16.812 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:16.812 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:16.812 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:17.072 /dev/nbd0 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.072 23:51:28 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.072 1+0 records in 00:17:17.072 1+0 records out 00:17:17.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522847 s, 7.8 MB/s 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.072 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:17.331 /dev/nbd1 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.331 1+0 records in 00:17:17.331 1+0 records out 00:17:17.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469992 s, 8.7 MB/s 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.331 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:17.591 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:17.591 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.591 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:17.591 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.591 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:17.591 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.591 23:51:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.591 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.852 [2024-12-06 23:51:29.367865] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:17.852 [2024-12-06 23:51:29.367917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.852 [2024-12-06 23:51:29.367943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:17.852 [2024-12-06 23:51:29.367952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.852 [2024-12-06 23:51:29.370329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.852 [2024-12-06 23:51:29.370402] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:17.852 [2024-12-06 23:51:29.370529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:17.852 [2024-12-06 23:51:29.370601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.852 [2024-12-06 23:51:29.370830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.852 spare 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.852 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.111 [2024-12-06 23:51:29.470774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:18.111 [2024-12-06 23:51:29.470802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:18.111 [2024-12-06 23:51:29.471055] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:18.111 [2024-12-06 23:51:29.471221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:18.111 [2024-12-06 23:51:29.471235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:18.111 [2024-12-06 23:51:29.471404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.111 23:51:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.111 "name": "raid_bdev1", 00:17:18.111 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:18.111 "strip_size_kb": 0, 00:17:18.111 "state": "online", 00:17:18.111 "raid_level": "raid1", 00:17:18.111 "superblock": true, 00:17:18.111 "num_base_bdevs": 2, 00:17:18.111 "num_base_bdevs_discovered": 2, 00:17:18.111 "num_base_bdevs_operational": 2, 00:17:18.111 "base_bdevs_list": [ 00:17:18.111 { 00:17:18.111 "name": "spare", 00:17:18.111 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:18.111 "is_configured": true, 00:17:18.111 "data_offset": 256, 00:17:18.111 "data_size": 7936 00:17:18.111 }, 00:17:18.111 { 00:17:18.111 "name": "BaseBdev2", 00:17:18.111 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:18.111 "is_configured": true, 00:17:18.111 "data_offset": 256, 00:17:18.111 "data_size": 7936 00:17:18.111 } 00:17:18.111 ] 00:17:18.111 }' 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.111 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.370 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.370 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.370 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.370 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.370 23:51:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.675 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.675 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.675 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.675 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.675 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.675 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.675 "name": "raid_bdev1", 00:17:18.675 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:18.675 "strip_size_kb": 0, 00:17:18.675 "state": "online", 00:17:18.676 "raid_level": "raid1", 00:17:18.676 "superblock": true, 00:17:18.676 "num_base_bdevs": 2, 00:17:18.676 "num_base_bdevs_discovered": 2, 00:17:18.676 "num_base_bdevs_operational": 2, 00:17:18.676 "base_bdevs_list": [ 00:17:18.676 { 00:17:18.676 "name": "spare", 00:17:18.676 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:18.676 "is_configured": true, 00:17:18.676 "data_offset": 256, 00:17:18.676 "data_size": 7936 00:17:18.676 }, 00:17:18.676 { 00:17:18.676 "name": "BaseBdev2", 00:17:18.676 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:18.676 "is_configured": true, 00:17:18.676 "data_offset": 256, 00:17:18.676 "data_size": 7936 00:17:18.676 } 00:17:18.676 ] 00:17:18.676 }' 00:17:18.676 23:51:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.676 23:51:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.676 [2024-12-06 23:51:30.122589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.676 "name": "raid_bdev1", 00:17:18.676 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:18.676 "strip_size_kb": 0, 00:17:18.676 "state": "online", 00:17:18.676 "raid_level": "raid1", 00:17:18.676 "superblock": true, 00:17:18.676 "num_base_bdevs": 2, 00:17:18.676 "num_base_bdevs_discovered": 1, 00:17:18.676 "num_base_bdevs_operational": 1, 00:17:18.676 "base_bdevs_list": [ 00:17:18.676 { 00:17:18.676 "name": null, 00:17:18.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.676 "is_configured": false, 00:17:18.676 "data_offset": 0, 00:17:18.676 "data_size": 7936 00:17:18.676 }, 00:17:18.676 { 00:17:18.676 "name": "BaseBdev2", 00:17:18.676 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:18.676 "is_configured": true, 00:17:18.676 "data_offset": 256, 00:17:18.676 "data_size": 7936 00:17:18.676 } 00:17:18.676 ] 00:17:18.676 }' 
00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.676 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.245 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.245 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.245 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.245 [2024-12-06 23:51:30.589785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.245 [2024-12-06 23:51:30.589971] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.245 [2024-12-06 23:51:30.590049] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:19.245 [2024-12-06 23:51:30.590104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.245 [2024-12-06 23:51:30.605614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:19.245 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.245 23:51:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:19.245 [2024-12-06 23:51:30.607458] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.186 "name": "raid_bdev1", 00:17:20.186 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:20.186 "strip_size_kb": 0, 00:17:20.186 "state": "online", 00:17:20.186 "raid_level": "raid1", 00:17:20.186 "superblock": true, 00:17:20.186 "num_base_bdevs": 2, 00:17:20.186 "num_base_bdevs_discovered": 2, 00:17:20.186 "num_base_bdevs_operational": 2, 00:17:20.186 "process": { 00:17:20.186 "type": "rebuild", 00:17:20.186 "target": "spare", 00:17:20.186 "progress": { 00:17:20.186 "blocks": 2560, 00:17:20.186 "percent": 32 00:17:20.186 } 00:17:20.186 }, 00:17:20.186 "base_bdevs_list": [ 00:17:20.186 { 00:17:20.186 "name": "spare", 00:17:20.186 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:20.186 "is_configured": true, 00:17:20.186 "data_offset": 256, 00:17:20.186 "data_size": 7936 00:17:20.186 }, 00:17:20.186 { 00:17:20.186 "name": "BaseBdev2", 00:17:20.186 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:20.186 "is_configured": true, 00:17:20.186 "data_offset": 256, 00:17:20.186 "data_size": 7936 00:17:20.186 } 00:17:20.186 ] 00:17:20.186 }' 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.186 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.476 [2024-12-06 23:51:31.767296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.476 [2024-12-06 23:51:31.812209] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:20.476 [2024-12-06 23:51:31.812324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.476 [2024-12-06 23:51:31.812359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.476 [2024-12-06 23:51:31.812381] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.476 "name": "raid_bdev1", 00:17:20.476 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:20.476 "strip_size_kb": 0, 00:17:20.476 "state": "online", 00:17:20.476 "raid_level": "raid1", 00:17:20.476 "superblock": true, 00:17:20.476 "num_base_bdevs": 2, 00:17:20.476 "num_base_bdevs_discovered": 1, 00:17:20.476 "num_base_bdevs_operational": 1, 00:17:20.476 "base_bdevs_list": [ 00:17:20.476 { 00:17:20.476 "name": null, 00:17:20.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.476 "is_configured": false, 00:17:20.476 "data_offset": 0, 00:17:20.476 "data_size": 7936 00:17:20.476 }, 00:17:20.476 { 00:17:20.476 "name": "BaseBdev2", 00:17:20.476 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:20.476 "is_configured": true, 00:17:20.476 
"data_offset": 256, 00:17:20.476 "data_size": 7936 00:17:20.476 } 00:17:20.476 ] 00:17:20.476 }' 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.476 23:51:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.761 23:51:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:20.761 23:51:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.761 23:51:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.761 [2024-12-06 23:51:32.319849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:20.761 [2024-12-06 23:51:32.319902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.761 [2024-12-06 23:51:32.319921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:20.761 [2024-12-06 23:51:32.319930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.761 [2024-12-06 23:51:32.320387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.761 [2024-12-06 23:51:32.320408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:20.761 [2024-12-06 23:51:32.320485] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:20.761 [2024-12-06 23:51:32.320499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.761 [2024-12-06 23:51:32.320508] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:20.761 [2024-12-06 23:51:32.320532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.025 [2024-12-06 23:51:32.334997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:21.025 spare 00:17:21.025 23:51:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.025 [2024-12-06 23:51:32.336790] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:21.025 23:51:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.965 "name": "raid_bdev1", 00:17:21.965 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:21.965 "strip_size_kb": 0, 00:17:21.965 
"state": "online", 00:17:21.965 "raid_level": "raid1", 00:17:21.965 "superblock": true, 00:17:21.965 "num_base_bdevs": 2, 00:17:21.965 "num_base_bdevs_discovered": 2, 00:17:21.965 "num_base_bdevs_operational": 2, 00:17:21.965 "process": { 00:17:21.965 "type": "rebuild", 00:17:21.965 "target": "spare", 00:17:21.965 "progress": { 00:17:21.965 "blocks": 2560, 00:17:21.965 "percent": 32 00:17:21.965 } 00:17:21.965 }, 00:17:21.965 "base_bdevs_list": [ 00:17:21.965 { 00:17:21.965 "name": "spare", 00:17:21.965 "uuid": "23365b98-1a53-5d5f-a55a-2927553e08d4", 00:17:21.965 "is_configured": true, 00:17:21.965 "data_offset": 256, 00:17:21.965 "data_size": 7936 00:17:21.965 }, 00:17:21.965 { 00:17:21.965 "name": "BaseBdev2", 00:17:21.965 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:21.965 "is_configured": true, 00:17:21.965 "data_offset": 256, 00:17:21.965 "data_size": 7936 00:17:21.965 } 00:17:21.965 ] 00:17:21.965 }' 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.965 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.965 [2024-12-06 23:51:33.497041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.226 [2024-12-06 23:51:33.541419] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:22.226 [2024-12-06 23:51:33.541536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.226 [2024-12-06 23:51:33.541576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.226 [2024-12-06 23:51:33.541597] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.226 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.226 23:51:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.227 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.227 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.227 "name": "raid_bdev1", 00:17:22.227 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:22.227 "strip_size_kb": 0, 00:17:22.227 "state": "online", 00:17:22.227 "raid_level": "raid1", 00:17:22.227 "superblock": true, 00:17:22.227 "num_base_bdevs": 2, 00:17:22.227 "num_base_bdevs_discovered": 1, 00:17:22.227 "num_base_bdevs_operational": 1, 00:17:22.227 "base_bdevs_list": [ 00:17:22.227 { 00:17:22.227 "name": null, 00:17:22.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.227 "is_configured": false, 00:17:22.227 "data_offset": 0, 00:17:22.227 "data_size": 7936 00:17:22.227 }, 00:17:22.227 { 00:17:22.227 "name": "BaseBdev2", 00:17:22.227 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:22.227 "is_configured": true, 00:17:22.227 "data_offset": 256, 00:17:22.227 "data_size": 7936 00:17:22.227 } 00:17:22.227 ] 00:17:22.227 }' 00:17:22.227 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.227 23:51:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.487 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.487 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.487 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.487 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.487 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.748 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.748 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.748 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.748 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.749 "name": "raid_bdev1", 00:17:22.749 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:22.749 "strip_size_kb": 0, 00:17:22.749 "state": "online", 00:17:22.749 "raid_level": "raid1", 00:17:22.749 "superblock": true, 00:17:22.749 "num_base_bdevs": 2, 00:17:22.749 "num_base_bdevs_discovered": 1, 00:17:22.749 "num_base_bdevs_operational": 1, 00:17:22.749 "base_bdevs_list": [ 00:17:22.749 { 00:17:22.749 "name": null, 00:17:22.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.749 "is_configured": false, 00:17:22.749 "data_offset": 0, 00:17:22.749 "data_size": 7936 00:17:22.749 }, 00:17:22.749 { 00:17:22.749 "name": "BaseBdev2", 00:17:22.749 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:22.749 "is_configured": true, 00:17:22.749 "data_offset": 256, 00:17:22.749 "data_size": 7936 00:17:22.749 } 00:17:22.749 ] 00:17:22.749 }' 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.749 [2024-12-06 23:51:34.214267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:22.749 [2024-12-06 23:51:34.214322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.749 [2024-12-06 23:51:34.214351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:22.749 [2024-12-06 23:51:34.214370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.749 [2024-12-06 23:51:34.214806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.749 [2024-12-06 23:51:34.214826] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:22.749 [2024-12-06 23:51:34.214918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:22.749 [2024-12-06 23:51:34.214932] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:22.749 [2024-12-06 23:51:34.214943] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:22.749 [2024-12-06 23:51:34.214953] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:22.749 BaseBdev1 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.749 23:51:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:23.698 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.698 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.698 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.699 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.965 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.965 "name": "raid_bdev1", 00:17:23.965 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:23.965 "strip_size_kb": 0, 00:17:23.965 "state": "online", 00:17:23.965 "raid_level": "raid1", 00:17:23.965 "superblock": true, 00:17:23.965 "num_base_bdevs": 2, 00:17:23.965 "num_base_bdevs_discovered": 1, 00:17:23.965 "num_base_bdevs_operational": 1, 00:17:23.965 "base_bdevs_list": [ 00:17:23.965 { 00:17:23.965 "name": null, 00:17:23.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.965 "is_configured": false, 00:17:23.965 "data_offset": 0, 00:17:23.965 "data_size": 7936 00:17:23.965 }, 00:17:23.965 { 00:17:23.965 "name": "BaseBdev2", 00:17:23.965 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:23.965 "is_configured": true, 00:17:23.965 "data_offset": 256, 00:17:23.965 "data_size": 7936 00:17:23.965 } 00:17:23.965 ] 00:17:23.965 }' 00:17:23.965 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.965 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.225 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.225 "name": "raid_bdev1", 00:17:24.225 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:24.225 "strip_size_kb": 0, 00:17:24.225 "state": "online", 00:17:24.225 "raid_level": "raid1", 00:17:24.226 "superblock": true, 00:17:24.226 "num_base_bdevs": 2, 00:17:24.226 "num_base_bdevs_discovered": 1, 00:17:24.226 "num_base_bdevs_operational": 1, 00:17:24.226 "base_bdevs_list": [ 00:17:24.226 { 00:17:24.226 "name": null, 00:17:24.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.226 "is_configured": false, 00:17:24.226 "data_offset": 0, 00:17:24.226 "data_size": 7936 00:17:24.226 }, 00:17:24.226 { 00:17:24.226 "name": "BaseBdev2", 00:17:24.226 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:24.226 "is_configured": true, 00:17:24.226 "data_offset": 256, 00:17:24.226 "data_size": 7936 00:17:24.226 } 00:17:24.226 ] 00:17:24.226 }' 00:17:24.226 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.485 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.485 [2024-12-06 23:51:35.851507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.485 [2024-12-06 23:51:35.851716] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:24.486 [2024-12-06 23:51:35.851777] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:24.486 request: 00:17:24.486 { 00:17:24.486 "base_bdev": "BaseBdev1", 00:17:24.486 "raid_bdev": "raid_bdev1", 00:17:24.486 "method": "bdev_raid_add_base_bdev", 00:17:24.486 "req_id": 1 00:17:24.486 } 00:17:24.486 Got JSON-RPC error response 00:17:24.486 response: 00:17:24.486 { 00:17:24.486 "code": -22, 00:17:24.486 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:24.486 } 00:17:24.486 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
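The `NOT rpc_cmd bdev_raid_add_base_bdev` call above is expected to fail: the captured JSON-RPC response carries error code -22 (Invalid argument) because the base bdev's superblock does not match the running raid bdev. A minimal standalone sketch of checking for that expected failure code in a captured response body (the `response` literal here is illustrative, abridged from the trace, and the check deliberately avoids a `jq` dependency):

```shell
#!/usr/bin/env bash
# Illustrative only: verify that a captured JSON-RPC error response, like the
# one logged above, reports code -22 (Invalid argument) for the rejected
# bdev_raid_add_base_bdev call.
response='{ "code": -22, "message": "Failed to add base bdev to RAID bdev: Invalid argument" }'
if printf '%s' "$response" | grep -q '"code": -22'; then
  echo "expected failure observed"
fi
```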
00:17:24.486 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:24.486 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.486 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.486 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.486 23:51:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.423 "name": "raid_bdev1", 00:17:25.423 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:25.423 "strip_size_kb": 0, 00:17:25.423 "state": "online", 00:17:25.423 "raid_level": "raid1", 00:17:25.423 "superblock": true, 00:17:25.423 "num_base_bdevs": 2, 00:17:25.423 "num_base_bdevs_discovered": 1, 00:17:25.423 "num_base_bdevs_operational": 1, 00:17:25.423 "base_bdevs_list": [ 00:17:25.423 { 00:17:25.423 "name": null, 00:17:25.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.423 "is_configured": false, 00:17:25.423 "data_offset": 0, 00:17:25.423 "data_size": 7936 00:17:25.423 }, 00:17:25.423 { 00:17:25.423 "name": "BaseBdev2", 00:17:25.423 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:25.423 "is_configured": true, 00:17:25.423 "data_offset": 256, 00:17:25.423 "data_size": 7936 00:17:25.423 } 00:17:25.423 ] 00:17:25.423 }' 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.423 23:51:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.993 23:51:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.993 "name": "raid_bdev1", 00:17:25.993 "uuid": "0c35fca9-2056-492b-bf7f-033fdc9f9ecb", 00:17:25.993 "strip_size_kb": 0, 00:17:25.993 "state": "online", 00:17:25.993 "raid_level": "raid1", 00:17:25.993 "superblock": true, 00:17:25.993 "num_base_bdevs": 2, 00:17:25.993 "num_base_bdevs_discovered": 1, 00:17:25.993 "num_base_bdevs_operational": 1, 00:17:25.993 "base_bdevs_list": [ 00:17:25.993 { 00:17:25.993 "name": null, 00:17:25.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.993 "is_configured": false, 00:17:25.993 "data_offset": 0, 00:17:25.993 "data_size": 7936 00:17:25.993 }, 00:17:25.993 { 00:17:25.993 "name": "BaseBdev2", 00:17:25.993 "uuid": "da747b1e-40a4-5d32-a5da-65d7d1c2ff8a", 00:17:25.993 "is_configured": true, 00:17:25.993 "data_offset": 256, 00:17:25.993 "data_size": 7936 00:17:25.993 } 00:17:25.993 ] 00:17:25.993 }' 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.993 23:51:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86403 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86403 ']' 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86403 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86403 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.993 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86403' 00:17:25.993 killing process with pid 86403 00:17:25.993 Received shutdown signal, test time was about 60.000000 seconds 00:17:25.993 00:17:25.993 Latency(us) 00:17:25.993 [2024-12-06T23:51:37.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.994 [2024-12-06T23:51:37.557Z] =================================================================================================================== 00:17:25.994 [2024-12-06T23:51:37.557Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:25.994 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86403 00:17:25.994 [2024-12-06 23:51:37.531542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.994 [2024-12-06 23:51:37.531640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.994 [2024-12-06 23:51:37.531692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:17:25.994 [2024-12-06 23:51:37.531703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:25.994 23:51:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86403 00:17:26.254 [2024-12-06 23:51:37.814497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.639 23:51:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:27.639 00:17:27.639 real 0m19.904s 00:17:27.639 user 0m26.078s 00:17:27.639 sys 0m2.716s 00:17:27.639 23:51:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.639 ************************************ 00:17:27.639 END TEST raid_rebuild_test_sb_4k 00:17:27.639 ************************************ 00:17:27.639 23:51:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.639 23:51:38 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:27.639 23:51:38 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:27.639 23:51:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:27.639 23:51:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.639 23:51:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.639 ************************************ 00:17:27.639 START TEST raid_state_function_test_sb_md_separate 00:17:27.639 ************************************ 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:27.639 23:51:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:27.639 23:51:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87102 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87102' 00:17:27.639 Process raid pid: 87102 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87102 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87102 ']' 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.639 23:51:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.639 [2024-12-06 23:51:39.025416] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:17:27.639 [2024-12-06 23:51:39.025526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.900 [2024-12-06 23:51:39.205576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.900 [2024-12-06 23:51:39.310542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.160 [2024-12-06 23:51:39.512284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.160 [2024-12-06 23:51:39.512320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.421 [2024-12-06 23:51:39.845232] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.421 [2024-12-06 23:51:39.845283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:28.421 [2024-12-06 23:51:39.845293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.421 [2024-12-06 23:51:39.845302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.421 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.421 "name": "Existed_Raid", 00:17:28.421 "uuid": "17f77d99-3b96-44e3-8bde-e5761da622c9", 00:17:28.421 "strip_size_kb": 0, 00:17:28.421 "state": "configuring", 00:17:28.421 "raid_level": "raid1", 00:17:28.421 "superblock": true, 00:17:28.421 "num_base_bdevs": 2, 00:17:28.421 "num_base_bdevs_discovered": 0, 00:17:28.421 "num_base_bdevs_operational": 2, 00:17:28.421 "base_bdevs_list": [ 00:17:28.422 { 00:17:28.422 "name": "BaseBdev1", 00:17:28.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.422 "is_configured": false, 00:17:28.422 "data_offset": 0, 00:17:28.422 "data_size": 0 00:17:28.422 }, 00:17:28.422 { 00:17:28.422 "name": "BaseBdev2", 00:17:28.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.422 "is_configured": false, 00:17:28.422 "data_offset": 0, 00:17:28.422 "data_size": 0 00:17:28.422 } 00:17:28.422 ] 00:17:28.422 }' 00:17:28.422 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.422 23:51:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.992 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:28.992 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.992 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.992 
[2024-12-06 23:51:40.276476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.992 [2024-12-06 23:51:40.276557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:28.992 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 [2024-12-06 23:51:40.288452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.993 [2024-12-06 23:51:40.288533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.993 [2024-12-06 23:51:40.288559] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.993 [2024-12-06 23:51:40.288583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 [2024-12-06 23:51:40.336637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.993 
BaseBdev1 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 [ 00:17:28.993 { 00:17:28.993 "name": "BaseBdev1", 00:17:28.993 "aliases": [ 00:17:28.993 "43124e84-93ba-4577-9a08-dbb2d8066786" 00:17:28.993 ], 00:17:28.993 "product_name": "Malloc disk", 
00:17:28.993 "block_size": 4096, 00:17:28.993 "num_blocks": 8192, 00:17:28.993 "uuid": "43124e84-93ba-4577-9a08-dbb2d8066786", 00:17:28.993 "md_size": 32, 00:17:28.993 "md_interleave": false, 00:17:28.993 "dif_type": 0, 00:17:28.993 "assigned_rate_limits": { 00:17:28.993 "rw_ios_per_sec": 0, 00:17:28.993 "rw_mbytes_per_sec": 0, 00:17:28.993 "r_mbytes_per_sec": 0, 00:17:28.993 "w_mbytes_per_sec": 0 00:17:28.993 }, 00:17:28.993 "claimed": true, 00:17:28.993 "claim_type": "exclusive_write", 00:17:28.993 "zoned": false, 00:17:28.993 "supported_io_types": { 00:17:28.993 "read": true, 00:17:28.993 "write": true, 00:17:28.993 "unmap": true, 00:17:28.993 "flush": true, 00:17:28.993 "reset": true, 00:17:28.993 "nvme_admin": false, 00:17:28.993 "nvme_io": false, 00:17:28.993 "nvme_io_md": false, 00:17:28.993 "write_zeroes": true, 00:17:28.993 "zcopy": true, 00:17:28.993 "get_zone_info": false, 00:17:28.993 "zone_management": false, 00:17:28.993 "zone_append": false, 00:17:28.993 "compare": false, 00:17:28.993 "compare_and_write": false, 00:17:28.993 "abort": true, 00:17:28.993 "seek_hole": false, 00:17:28.993 "seek_data": false, 00:17:28.993 "copy": true, 00:17:28.993 "nvme_iov_md": false 00:17:28.993 }, 00:17:28.993 "memory_domains": [ 00:17:28.993 { 00:17:28.993 "dma_device_id": "system", 00:17:28.993 "dma_device_type": 1 00:17:28.993 }, 00:17:28.993 { 00:17:28.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.993 "dma_device_type": 2 00:17:28.993 } 00:17:28.993 ], 00:17:28.993 "driver_specific": {} 00:17:28.993 } 00:17:28.993 ] 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:28.993 23:51:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.993 "name": "Existed_Raid", 00:17:28.993 "uuid": "d9142524-91e0-4626-9d3e-1da9ede49db4", 
00:17:28.993 "strip_size_kb": 0, 00:17:28.993 "state": "configuring", 00:17:28.993 "raid_level": "raid1", 00:17:28.993 "superblock": true, 00:17:28.993 "num_base_bdevs": 2, 00:17:28.993 "num_base_bdevs_discovered": 1, 00:17:28.993 "num_base_bdevs_operational": 2, 00:17:28.993 "base_bdevs_list": [ 00:17:28.993 { 00:17:28.993 "name": "BaseBdev1", 00:17:28.993 "uuid": "43124e84-93ba-4577-9a08-dbb2d8066786", 00:17:28.993 "is_configured": true, 00:17:28.993 "data_offset": 256, 00:17:28.993 "data_size": 7936 00:17:28.993 }, 00:17:28.993 { 00:17:28.993 "name": "BaseBdev2", 00:17:28.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.993 "is_configured": false, 00:17:28.993 "data_offset": 0, 00:17:28.993 "data_size": 0 00:17:28.993 } 00:17:28.993 ] 00:17:28.993 }' 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.993 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.564 [2024-12-06 23:51:40.851792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.564 [2024-12-06 23:51:40.851872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:29.564 23:51:40 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.564 [2024-12-06 23:51:40.863808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.564 [2024-12-06 23:51:40.865549] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.564 [2024-12-06 23:51:40.865595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.564 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.564 "name": "Existed_Raid", 00:17:29.564 "uuid": "ec519a43-f63a-401f-beee-a29f1f3a15b4", 00:17:29.564 "strip_size_kb": 0, 00:17:29.564 "state": "configuring", 00:17:29.564 "raid_level": "raid1", 00:17:29.564 "superblock": true, 00:17:29.565 "num_base_bdevs": 2, 00:17:29.565 "num_base_bdevs_discovered": 1, 00:17:29.565 "num_base_bdevs_operational": 2, 00:17:29.565 "base_bdevs_list": [ 00:17:29.565 { 00:17:29.565 "name": "BaseBdev1", 00:17:29.565 "uuid": "43124e84-93ba-4577-9a08-dbb2d8066786", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 256, 00:17:29.565 "data_size": 7936 00:17:29.565 }, 00:17:29.565 { 00:17:29.565 "name": "BaseBdev2", 00:17:29.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.565 "is_configured": false, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 0 00:17:29.565 } 00:17:29.565 ] 00:17:29.565 }' 00:17:29.565 23:51:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.565 23:51:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.825 [2024-12-06 23:51:41.355109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.825 [2024-12-06 23:51:41.355450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:29.825 [2024-12-06 23:51:41.355511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:29.825 [2024-12-06 23:51:41.355615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:29.825 [2024-12-06 23:51:41.355796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:29.825 [2024-12-06 23:51:41.355845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:29.825 [2024-12-06 23:51:41.356002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.825 BaseBdev2 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.825 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.825 [ 00:17:29.825 { 00:17:29.825 "name": "BaseBdev2", 00:17:29.825 "aliases": [ 00:17:29.825 "018b8bca-3dde-47e3-9e22-0e251485d1d0" 00:17:29.825 ], 00:17:30.086 "product_name": "Malloc disk", 00:17:30.086 "block_size": 4096, 00:17:30.086 "num_blocks": 8192, 00:17:30.086 "uuid": "018b8bca-3dde-47e3-9e22-0e251485d1d0", 00:17:30.086 "md_size": 32, 00:17:30.086 "md_interleave": false, 00:17:30.086 "dif_type": 0, 00:17:30.086 "assigned_rate_limits": { 00:17:30.086 "rw_ios_per_sec": 0, 00:17:30.086 "rw_mbytes_per_sec": 0, 00:17:30.086 "r_mbytes_per_sec": 0, 00:17:30.086 "w_mbytes_per_sec": 0 00:17:30.086 }, 00:17:30.086 "claimed": true, 00:17:30.086 "claim_type": 
"exclusive_write", 00:17:30.086 "zoned": false, 00:17:30.086 "supported_io_types": { 00:17:30.086 "read": true, 00:17:30.086 "write": true, 00:17:30.086 "unmap": true, 00:17:30.086 "flush": true, 00:17:30.086 "reset": true, 00:17:30.086 "nvme_admin": false, 00:17:30.086 "nvme_io": false, 00:17:30.086 "nvme_io_md": false, 00:17:30.086 "write_zeroes": true, 00:17:30.086 "zcopy": true, 00:17:30.086 "get_zone_info": false, 00:17:30.086 "zone_management": false, 00:17:30.086 "zone_append": false, 00:17:30.086 "compare": false, 00:17:30.086 "compare_and_write": false, 00:17:30.086 "abort": true, 00:17:30.086 "seek_hole": false, 00:17:30.086 "seek_data": false, 00:17:30.086 "copy": true, 00:17:30.086 "nvme_iov_md": false 00:17:30.086 }, 00:17:30.086 "memory_domains": [ 00:17:30.086 { 00:17:30.086 "dma_device_id": "system", 00:17:30.086 "dma_device_type": 1 00:17:30.086 }, 00:17:30.086 { 00:17:30.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.086 "dma_device_type": 2 00:17:30.086 } 00:17:30.086 ], 00:17:30.086 "driver_specific": {} 00:17:30.086 } 00:17:30.086 ] 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.086 
23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.086 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.086 "name": "Existed_Raid", 00:17:30.086 "uuid": "ec519a43-f63a-401f-beee-a29f1f3a15b4", 00:17:30.086 "strip_size_kb": 0, 00:17:30.086 "state": "online", 00:17:30.086 "raid_level": "raid1", 00:17:30.086 "superblock": true, 00:17:30.086 "num_base_bdevs": 2, 00:17:30.086 "num_base_bdevs_discovered": 2, 00:17:30.087 "num_base_bdevs_operational": 2, 00:17:30.087 
"base_bdevs_list": [ 00:17:30.087 { 00:17:30.087 "name": "BaseBdev1", 00:17:30.087 "uuid": "43124e84-93ba-4577-9a08-dbb2d8066786", 00:17:30.087 "is_configured": true, 00:17:30.087 "data_offset": 256, 00:17:30.087 "data_size": 7936 00:17:30.087 }, 00:17:30.087 { 00:17:30.087 "name": "BaseBdev2", 00:17:30.087 "uuid": "018b8bca-3dde-47e3-9e22-0e251485d1d0", 00:17:30.087 "is_configured": true, 00:17:30.087 "data_offset": 256, 00:17:30.087 "data_size": 7936 00:17:30.087 } 00:17:30.087 ] 00:17:30.087 }' 00:17:30.087 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.087 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:30.347 [2024-12-06 23:51:41.858540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.347 "name": "Existed_Raid", 00:17:30.347 "aliases": [ 00:17:30.347 "ec519a43-f63a-401f-beee-a29f1f3a15b4" 00:17:30.347 ], 00:17:30.347 "product_name": "Raid Volume", 00:17:30.347 "block_size": 4096, 00:17:30.347 "num_blocks": 7936, 00:17:30.347 "uuid": "ec519a43-f63a-401f-beee-a29f1f3a15b4", 00:17:30.347 "md_size": 32, 00:17:30.347 "md_interleave": false, 00:17:30.347 "dif_type": 0, 00:17:30.347 "assigned_rate_limits": { 00:17:30.347 "rw_ios_per_sec": 0, 00:17:30.347 "rw_mbytes_per_sec": 0, 00:17:30.347 "r_mbytes_per_sec": 0, 00:17:30.347 "w_mbytes_per_sec": 0 00:17:30.347 }, 00:17:30.347 "claimed": false, 00:17:30.347 "zoned": false, 00:17:30.347 "supported_io_types": { 00:17:30.347 "read": true, 00:17:30.347 "write": true, 00:17:30.347 "unmap": false, 00:17:30.347 "flush": false, 00:17:30.347 "reset": true, 00:17:30.347 "nvme_admin": false, 00:17:30.347 "nvme_io": false, 00:17:30.347 "nvme_io_md": false, 00:17:30.347 "write_zeroes": true, 00:17:30.347 "zcopy": false, 00:17:30.347 "get_zone_info": false, 00:17:30.347 "zone_management": false, 00:17:30.347 "zone_append": false, 00:17:30.347 "compare": false, 00:17:30.347 "compare_and_write": false, 00:17:30.347 "abort": false, 00:17:30.347 "seek_hole": false, 00:17:30.347 "seek_data": false, 00:17:30.347 "copy": false, 00:17:30.347 "nvme_iov_md": false 00:17:30.347 }, 00:17:30.347 "memory_domains": [ 00:17:30.347 { 00:17:30.347 "dma_device_id": "system", 00:17:30.347 "dma_device_type": 1 00:17:30.347 }, 00:17:30.347 { 00:17:30.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.347 "dma_device_type": 2 00:17:30.347 }, 00:17:30.347 { 
00:17:30.347 "dma_device_id": "system", 00:17:30.347 "dma_device_type": 1 00:17:30.347 }, 00:17:30.347 { 00:17:30.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.347 "dma_device_type": 2 00:17:30.347 } 00:17:30.347 ], 00:17:30.347 "driver_specific": { 00:17:30.347 "raid": { 00:17:30.347 "uuid": "ec519a43-f63a-401f-beee-a29f1f3a15b4", 00:17:30.347 "strip_size_kb": 0, 00:17:30.347 "state": "online", 00:17:30.347 "raid_level": "raid1", 00:17:30.347 "superblock": true, 00:17:30.347 "num_base_bdevs": 2, 00:17:30.347 "num_base_bdevs_discovered": 2, 00:17:30.347 "num_base_bdevs_operational": 2, 00:17:30.347 "base_bdevs_list": [ 00:17:30.347 { 00:17:30.347 "name": "BaseBdev1", 00:17:30.347 "uuid": "43124e84-93ba-4577-9a08-dbb2d8066786", 00:17:30.347 "is_configured": true, 00:17:30.347 "data_offset": 256, 00:17:30.347 "data_size": 7936 00:17:30.347 }, 00:17:30.347 { 00:17:30.347 "name": "BaseBdev2", 00:17:30.347 "uuid": "018b8bca-3dde-47e3-9e22-0e251485d1d0", 00:17:30.347 "is_configured": true, 00:17:30.347 "data_offset": 256, 00:17:30.347 "data_size": 7936 00:17:30.347 } 00:17:30.347 ] 00:17:30.347 } 00:17:30.347 } 00:17:30.347 }' 00:17:30.347 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:30.608 BaseBdev2' 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.608 23:51:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.608 [2024-12-06 23:51:42.065964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.608 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.869 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.869 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.869 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.869 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.869 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.869 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.869 "name": "Existed_Raid", 00:17:30.869 "uuid": "ec519a43-f63a-401f-beee-a29f1f3a15b4", 00:17:30.869 "strip_size_kb": 0, 00:17:30.869 "state": "online", 00:17:30.869 "raid_level": "raid1", 00:17:30.869 "superblock": true, 00:17:30.869 "num_base_bdevs": 2, 00:17:30.869 "num_base_bdevs_discovered": 1, 00:17:30.869 "num_base_bdevs_operational": 1, 00:17:30.869 "base_bdevs_list": [ 00:17:30.869 { 00:17:30.869 "name": null, 00:17:30.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.869 "is_configured": false, 00:17:30.869 "data_offset": 0, 00:17:30.869 "data_size": 7936 00:17:30.869 }, 00:17:30.869 { 00:17:30.869 "name": "BaseBdev2", 00:17:30.869 "uuid": 
"018b8bca-3dde-47e3-9e22-0e251485d1d0", 00:17:30.869 "is_configured": true, 00:17:30.869 "data_offset": 256, 00:17:30.869 "data_size": 7936 00:17:30.869 } 00:17:30.869 ] 00:17:30.869 }' 00:17:30.869 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.869 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.129 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.129 [2024-12-06 23:51:42.628308] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:31.129 [2024-12-06 23:51:42.628457] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.389 [2024-12-06 23:51:42.723718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.389 [2024-12-06 23:51:42.723769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.389 [2024-12-06 23:51:42.723781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:31.389 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:31.389 23:51:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87102 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87102 ']' 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87102 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87102 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.390 killing process with pid 87102 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87102' 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87102 00:17:31.390 [2024-12-06 23:51:42.820421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.390 23:51:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87102 00:17:31.390 [2024-12-06 23:51:42.836888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.331 23:51:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:32.331 00:17:32.331 real 0m4.970s 00:17:32.331 user 0m7.058s 00:17:32.331 sys 0m0.929s 00:17:32.331 23:51:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.331 
************************************ 00:17:32.331 END TEST raid_state_function_test_sb_md_separate 00:17:32.331 ************************************ 00:17:32.331 23:51:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.591 23:51:43 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:32.591 23:51:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:32.591 23:51:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.591 23:51:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.591 ************************************ 00:17:32.591 START TEST raid_superblock_test_md_separate 00:17:32.591 ************************************ 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87349 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87349 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87349 ']' 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.591 23:51:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.591 [2024-12-06 23:51:44.068826] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:17:32.591 [2024-12-06 23:51:44.068962] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87349 ] 00:17:32.851 [2024-12-06 23:51:44.243965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.851 [2024-12-06 23:51:44.351407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.111 [2024-12-06 23:51:44.541404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.111 [2024-12-06 23:51:44.541504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:33.372 23:51:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.372 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.634 malloc1 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.634 [2024-12-06 23:51:44.949685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.634 [2024-12-06 23:51:44.949779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.634 [2024-12-06 23:51:44.949816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.634 [2024-12-06 23:51:44.949843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.634 [2024-12-06 23:51:44.951639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.634 [2024-12-06 23:51:44.951725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:33.634 pt1 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.634 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.635 23:51:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:33.635 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.635 23:51:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.635 malloc2 00:17:33.635 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.635 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.635 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.635 23:51:45 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.635 [2024-12-06 23:51:45.007914] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.635 [2024-12-06 23:51:45.007963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.635 [2024-12-06 23:51:45.007990] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:33.635 [2024-12-06 23:51:45.007998] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.635 [2024-12-06 23:51:45.009752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.635 [2024-12-06 23:51:45.009787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.635 pt2 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.636 [2024-12-06 23:51:45.019921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.636 [2024-12-06 23:51:45.021582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.636 [2024-12-06 23:51:45.021772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:33.636 [2024-12-06 23:51:45.021788] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:33.636 [2024-12-06 23:51:45.021859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:33.636 [2024-12-06 23:51:45.021965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:33.636 [2024-12-06 23:51:45.021976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:33.636 [2024-12-06 23:51:45.022083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.636 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.636 23:51:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.637 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.637 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.637 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.637 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.637 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.637 "name": "raid_bdev1", 00:17:33.637 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7", 00:17:33.637 "strip_size_kb": 0, 00:17:33.637 "state": "online", 00:17:33.637 "raid_level": "raid1", 00:17:33.637 "superblock": true, 00:17:33.637 "num_base_bdevs": 2, 00:17:33.637 "num_base_bdevs_discovered": 2, 00:17:33.637 "num_base_bdevs_operational": 2, 00:17:33.637 "base_bdevs_list": [ 00:17:33.637 { 00:17:33.637 "name": "pt1", 00:17:33.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.637 "is_configured": true, 00:17:33.637 "data_offset": 256, 00:17:33.637 "data_size": 7936 00:17:33.637 }, 00:17:33.637 { 00:17:33.637 "name": "pt2", 00:17:33.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.637 "is_configured": true, 00:17:33.637 "data_offset": 256, 00:17:33.637 "data_size": 7936 00:17:33.637 } 00:17:33.637 ] 00:17:33.637 }' 00:17:33.637 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.637 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.903 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.903 [2024-12-06 23:51:45.459405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.162 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.162 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.162 "name": "raid_bdev1", 00:17:34.162 "aliases": [ 00:17:34.162 "23550767-7892-440b-b7cf-537a3a05eed7" 00:17:34.162 ], 00:17:34.162 "product_name": "Raid Volume", 00:17:34.162 "block_size": 4096, 00:17:34.162 "num_blocks": 7936, 00:17:34.162 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7", 00:17:34.162 "md_size": 32, 00:17:34.162 "md_interleave": false, 00:17:34.162 "dif_type": 0, 00:17:34.162 "assigned_rate_limits": { 00:17:34.162 "rw_ios_per_sec": 0, 00:17:34.162 "rw_mbytes_per_sec": 0, 00:17:34.162 "r_mbytes_per_sec": 0, 00:17:34.162 "w_mbytes_per_sec": 0 00:17:34.162 }, 00:17:34.162 "claimed": false, 00:17:34.162 "zoned": false, 
00:17:34.162 "supported_io_types": { 00:17:34.162 "read": true, 00:17:34.162 "write": true, 00:17:34.162 "unmap": false, 00:17:34.162 "flush": false, 00:17:34.162 "reset": true, 00:17:34.162 "nvme_admin": false, 00:17:34.162 "nvme_io": false, 00:17:34.162 "nvme_io_md": false, 00:17:34.162 "write_zeroes": true, 00:17:34.162 "zcopy": false, 00:17:34.162 "get_zone_info": false, 00:17:34.162 "zone_management": false, 00:17:34.162 "zone_append": false, 00:17:34.162 "compare": false, 00:17:34.162 "compare_and_write": false, 00:17:34.162 "abort": false, 00:17:34.162 "seek_hole": false, 00:17:34.162 "seek_data": false, 00:17:34.162 "copy": false, 00:17:34.162 "nvme_iov_md": false 00:17:34.162 }, 00:17:34.162 "memory_domains": [ 00:17:34.162 { 00:17:34.162 "dma_device_id": "system", 00:17:34.162 "dma_device_type": 1 00:17:34.162 }, 00:17:34.162 { 00:17:34.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.162 "dma_device_type": 2 00:17:34.162 }, 00:17:34.162 { 00:17:34.162 "dma_device_id": "system", 00:17:34.162 "dma_device_type": 1 00:17:34.162 }, 00:17:34.162 { 00:17:34.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.162 "dma_device_type": 2 00:17:34.162 } 00:17:34.162 ], 00:17:34.162 "driver_specific": { 00:17:34.162 "raid": { 00:17:34.162 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7", 00:17:34.162 "strip_size_kb": 0, 00:17:34.162 "state": "online", 00:17:34.162 "raid_level": "raid1", 00:17:34.162 "superblock": true, 00:17:34.162 "num_base_bdevs": 2, 00:17:34.162 "num_base_bdevs_discovered": 2, 00:17:34.162 "num_base_bdevs_operational": 2, 00:17:34.162 "base_bdevs_list": [ 00:17:34.162 { 00:17:34.162 "name": "pt1", 00:17:34.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.162 "is_configured": true, 00:17:34.162 "data_offset": 256, 00:17:34.162 "data_size": 7936 00:17:34.162 }, 00:17:34.162 { 00:17:34.162 "name": "pt2", 00:17:34.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.162 "is_configured": true, 00:17:34.162 "data_offset": 256, 
00:17:34.162 "data_size": 7936 00:17:34.162 } 00:17:34.162 ] 00:17:34.162 } 00:17:34.162 } 00:17:34.162 }' 00:17:34.162 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:34.163 pt2' 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:34.163 [2024-12-06 23:51:45.686988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.163 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=23550767-7892-440b-b7cf-537a3a05eed7 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 23550767-7892-440b-b7cf-537a3a05eed7 ']' 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 [2024-12-06 23:51:45.734699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.423 [2024-12-06 23:51:45.734721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.423 [2024-12-06 23:51:45.734790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.423 [2024-12-06 23:51:45.734840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.423 [2024-12-06 23:51:45.734850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:34.423 23:51:45 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 [2024-12-06 23:51:45.874447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:34.423 [2024-12-06 23:51:45.876234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:34.423 [2024-12-06 23:51:45.876315] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:34.423 [2024-12-06 23:51:45.876360] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:34.423 [2024-12-06 23:51:45.876373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.423 [2024-12-06 23:51:45.876382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:34.423 request: 00:17:34.423 { 00:17:34.423 "name": 
"raid_bdev1", 00:17:34.423 "raid_level": "raid1", 00:17:34.423 "base_bdevs": [ 00:17:34.423 "malloc1", 00:17:34.423 "malloc2" 00:17:34.423 ], 00:17:34.423 "superblock": false, 00:17:34.423 "method": "bdev_raid_create", 00:17:34.423 "req_id": 1 00:17:34.423 } 00:17:34.423 Got JSON-RPC error response 00:17:34.423 response: 00:17:34.423 { 00:17:34.423 "code": -17, 00:17:34.423 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:34.423 } 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:34.423 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.424 [2024-12-06 23:51:45.938319] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.424 [2024-12-06 23:51:45.938428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.424 [2024-12-06 23:51:45.938459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:34.424 [2024-12-06 23:51:45.938488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.424 [2024-12-06 23:51:45.940342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.424 [2024-12-06 23:51:45.940435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.424 [2024-12-06 23:51:45.940497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:34.424 [2024-12-06 23:51:45.940576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:34.424 pt1 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.424 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.684 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.684 "name": "raid_bdev1", 00:17:34.684 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7", 00:17:34.684 "strip_size_kb": 0, 00:17:34.684 "state": "configuring", 00:17:34.684 "raid_level": "raid1", 00:17:34.684 "superblock": true, 00:17:34.684 "num_base_bdevs": 2, 00:17:34.684 "num_base_bdevs_discovered": 1, 00:17:34.684 "num_base_bdevs_operational": 2, 00:17:34.684 "base_bdevs_list": [ 00:17:34.684 { 00:17:34.684 "name": "pt1", 00:17:34.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.684 "is_configured": true, 00:17:34.684 "data_offset": 256, 00:17:34.684 "data_size": 7936 00:17:34.684 }, 00:17:34.684 { 00:17:34.684 "name": null, 00:17:34.684 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.684 "is_configured": false, 00:17:34.684 "data_offset": 256, 00:17:34.684 "data_size": 7936 00:17:34.684 } 00:17:34.684 ] 00:17:34.684 }' 00:17:34.684 23:51:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.684 23:51:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.943 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:34.943 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:34.943 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.943 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.943 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.943 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.943 [2024-12-06 23:51:46.389609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.943 [2024-12-06 23:51:46.389679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.943 [2024-12-06 23:51:46.389696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:34.943 [2024-12-06 23:51:46.389706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.943 [2024-12-06 23:51:46.389839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.943 [2024-12-06 23:51:46.389855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.943 [2024-12-06 23:51:46.389888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2
00:17:34.943 [2024-12-06 23:51:46.389904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:34.943 [2024-12-06 23:51:46.389992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:17:34.943 [2024-12-06 23:51:46.390001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:34.943 [2024-12-06 23:51:46.390064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:17:34.943 [2024-12-06 23:51:46.390163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:17:34.943 [2024-12-06 23:51:46.390170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:17:34.943 [2024-12-06 23:51:46.390255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:34.943 pt2
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:34.944 "name": "raid_bdev1",
00:17:34.944 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7",
00:17:34.944 "strip_size_kb": 0,
00:17:34.944 "state": "online",
00:17:34.944 "raid_level": "raid1",
00:17:34.944 "superblock": true,
00:17:34.944 "num_base_bdevs": 2,
00:17:34.944 "num_base_bdevs_discovered": 2,
00:17:34.944 "num_base_bdevs_operational": 2,
00:17:34.944 "base_bdevs_list": [
00:17:34.944 {
00:17:34.944 "name": "pt1",
00:17:34.944 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:34.944 "is_configured": true,
00:17:34.944 "data_offset": 256,
00:17:34.944 "data_size": 7936
00:17:34.944 },
00:17:34.944 {
00:17:34.944 "name": "pt2",
00:17:34.944 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:34.944 "is_configured": true,
00:17:34.944 "data_offset": 256,
00:17:34.944 "data_size": 7936
00:17:34.944 }
00:17:34.944 ]
00:17:34.944 }'
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:34.944 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.514 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.515 [2024-12-06 23:51:46.877007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:35.515 "name": "raid_bdev1",
00:17:35.515 "aliases": [
00:17:35.515 "23550767-7892-440b-b7cf-537a3a05eed7"
00:17:35.515 ],
00:17:35.515 "product_name": "Raid Volume",
00:17:35.515 "block_size": 4096,
00:17:35.515 "num_blocks": 7936,
00:17:35.515 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7",
00:17:35.515 "md_size": 32,
00:17:35.515 "md_interleave": false,
00:17:35.515 "dif_type": 0,
00:17:35.515 "assigned_rate_limits": {
00:17:35.515 "rw_ios_per_sec": 0,
00:17:35.515 "rw_mbytes_per_sec": 0,
00:17:35.515 "r_mbytes_per_sec": 0,
00:17:35.515 "w_mbytes_per_sec": 0
00:17:35.515 },
00:17:35.515 "claimed": false,
00:17:35.515 "zoned": false,
00:17:35.515 "supported_io_types": {
00:17:35.515 "read": true,
00:17:35.515 "write": true,
00:17:35.515 "unmap": false,
00:17:35.515 "flush": false,
00:17:35.515 "reset": true,
00:17:35.515 "nvme_admin": false,
00:17:35.515 "nvme_io": false,
00:17:35.515 "nvme_io_md": false,
00:17:35.515 "write_zeroes": true,
00:17:35.515 "zcopy": false,
00:17:35.515 "get_zone_info": false,
00:17:35.515 "zone_management": false,
00:17:35.515 "zone_append": false,
00:17:35.515 "compare": false,
00:17:35.515 "compare_and_write": false,
00:17:35.515 "abort": false,
00:17:35.515 "seek_hole": false,
00:17:35.515 "seek_data": false,
00:17:35.515 "copy": false,
00:17:35.515 "nvme_iov_md": false
00:17:35.515 },
00:17:35.515 "memory_domains": [
00:17:35.515 {
00:17:35.515 "dma_device_id": "system",
00:17:35.515 "dma_device_type": 1
00:17:35.515 },
00:17:35.515 {
00:17:35.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:35.515 "dma_device_type": 2
00:17:35.515 },
00:17:35.515 {
00:17:35.515 "dma_device_id": "system",
00:17:35.515 "dma_device_type": 1
00:17:35.515 },
00:17:35.515 {
00:17:35.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:35.515 "dma_device_type": 2
00:17:35.515 }
00:17:35.515 ],
00:17:35.515 "driver_specific": {
00:17:35.515 "raid": {
00:17:35.515 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7",
00:17:35.515 "strip_size_kb": 0,
00:17:35.515 "state": "online",
00:17:35.515 "raid_level": "raid1",
00:17:35.515 "superblock": true,
00:17:35.515 "num_base_bdevs": 2,
00:17:35.515 "num_base_bdevs_discovered": 2,
00:17:35.515 "num_base_bdevs_operational": 2,
00:17:35.515 "base_bdevs_list": [
00:17:35.515 {
00:17:35.515 "name": "pt1",
00:17:35.515 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:35.515 "is_configured": true,
00:17:35.515 "data_offset": 256,
00:17:35.515 "data_size": 7936
00:17:35.515 },
00:17:35.515 {
00:17:35.515 "name": "pt2",
00:17:35.515 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:35.515 "is_configured": true,
00:17:35.515 "data_offset": 256,
00:17:35.515 "data_size": 7936
00:17:35.515 }
00:17:35.515 ]
00:17:35.515 }
00:17:35.515 }
00:17:35.515 }'
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:35.515 pt2'
00:17:35.515 23:51:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.515 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:17:35.776 [2024-12-06 23:51:47.120637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591
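In the `verify_raid_bdev_properties` steps above, the harness flattens the separate-metadata geometry of the raid bdev and of each base bdev into a single string, `"block_size md_size md_interleave dif_type"` (built by jq), and compares the two strings in bash, expecting `4096 32 false 0` on both sides. A reduced sketch of just that comparison, with the values hard-coded from the trace (no jq or live RPC involved):

```shell
# Geometry strings in the order "block_size md_size md_interleave dif_type",
# as reported for raid_bdev1 and its base bdev pt1 in the trace above.
cmp_raid_bdev='4096 32 false 0'
cmp_base_bdev='4096 32 false 0'

# bdev_raid.sh@193 does a bash string comparison of the two; any difference
# in block size or metadata layout would fail the property check.
if [ "$cmp_base_bdev" = "$cmp_raid_bdev" ]; then
    md_check=match
else
    md_check=mismatch
fi
echo "md layout: $md_check"
```

Collapsing the four fields into one string keeps the check a single comparison instead of four, which is why the trace shows the escaped form `\4\0\9\6\ \3\2\ \f\a\l\s\e\ \0` on the right-hand side of `[[ ]]`.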
-- # [[ 0 == 0 ]]
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 23550767-7892-440b-b7cf-537a3a05eed7 '!=' 23550767-7892-440b-b7cf-537a3a05eed7 ']'
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.776 [2024-12-06 23:51:47.172328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:35.776 "name": "raid_bdev1",
00:17:35.776 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7",
00:17:35.776 "strip_size_kb": 0,
00:17:35.776 "state": "online",
00:17:35.776 "raid_level": "raid1",
00:17:35.776 "superblock": true,
00:17:35.776 "num_base_bdevs": 2,
00:17:35.776 "num_base_bdevs_discovered": 1,
00:17:35.776 "num_base_bdevs_operational": 1,
00:17:35.776 "base_bdevs_list": [
00:17:35.776 {
00:17:35.776 "name": null,
00:17:35.776 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:35.776 "is_configured": false,
00:17:35.776 "data_offset": 0,
00:17:35.776 "data_size": 7936
00:17:35.776 },
00:17:35.776 {
00:17:35.776 "name": "pt2",
00:17:35.776 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:35.776 "is_configured": true,
00:17:35.776 "data_offset": 256,
00:17:35.776 "data_size": 7936
00:17:35.776 }
00:17:35.776 ]
00:17:35.776 }'
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:35.776 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.347 [2024-12-06 23:51:47.651607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:36.347 [2024-12-06 23:51:47.651702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:36.347 [2024-12-06 23:51:47.651780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:36.347 [2024-12-06 23:51:47.651836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:36.347 [2024-12-06 23:51:47.651886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.347 [2024-12-06 23:51:47.723499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:36.347 [2024-12-06 23:51:47.723589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
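The trace above walks the base bdevs with a C-style index loop (`(( i = 1 ))`, `(( i < num_base_bdevs ))` at bdev_raid.sh@506/@511), deleting and then re-creating the passthru bdevs by position, which is why only pt2 is touched in a 2-bdev array. A names-only sketch of that loop shape (no RPCs, POSIX shell arithmetic instead of bash `(( ))`):

```shell
# Two base bdevs, as in the trace; the loop visits indexes 1 .. num_base_bdevs-1,
# i.e. every base bdev except the first one.
num_base_bdevs=2
recreated=''

i=1
while [ "$i" -lt "$num_base_bdevs" ]; do
    recreated="$recreated pt$((i + 1))"   # pt2 for i=1, matching the trace
    i=$((i + 1))
done
recreated=${recreated# }   # strip the leading separator

echo "recreated: $recreated"
```

With `num_base_bdevs=2` the loop body runs exactly once, so only `pt2` is re-created before the raid bdev is reassembled from its superblock.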
[2024-12-06 23:51:47.723607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:17:36.347 [2024-12-06 23:51:47.723617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:36.347 [2024-12-06 23:51:47.725505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:36.347 [2024-12-06 23:51:47.725548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:36.347 [2024-12-06 23:51:47.725590] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:36.347 [2024-12-06 23:51:47.725638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:36.347 [2024-12-06 23:51:47.725744] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:17:36.347 [2024-12-06 23:51:47.725756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:36.347 [2024-12-06 23:51:47.725827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:17:36.347 [2024-12-06 23:51:47.725956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:17:36.347 [2024-12-06 23:51:47.725985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:17:36.347 [2024-12-06 23:51:47.726062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:36.347 pt2
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.347 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:36.348 "name": "raid_bdev1",
00:17:36.348 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7",
00:17:36.348 "strip_size_kb": 0,
00:17:36.348 "state": "online",
00:17:36.348 "raid_level": "raid1",
00:17:36.348 "superblock": true,
00:17:36.348 "num_base_bdevs": 2,
00:17:36.348 "num_base_bdevs_discovered": 1,
00:17:36.348 "num_base_bdevs_operational": 1,
00:17:36.348 "base_bdevs_list": [
00:17:36.348 {
00:17:36.348 "name": null,
00:17:36.348 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:36.348 "is_configured": false,
00:17:36.348 "data_offset": 256,
00:17:36.348 "data_size": 7936
00:17:36.348 },
00:17:36.348 {
00:17:36.348 "name": "pt2",
00:17:36.348 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:36.348 "is_configured": true,
00:17:36.348 "data_offset": 256,
00:17:36.348 "data_size": 7936
00:17:36.348 }
00:17:36.348 ]
00:17:36.348 }'
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:36.348 23:51:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.919 [2024-12-06 23:51:48.198650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:36.919 [2024-12-06 23:51:48.198689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:36.919 [2024-12-06 23:51:48.198749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:36.919 [2024-12-06 23:51:48.198792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:36.919 [2024-12-06 23:51:48.198800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.919 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.919 [2024-12-06 23:51:48.262567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:36.919 [2024-12-06 23:51:48.262616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:36.919 [2024-12-06 23:51:48.262633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:17:36.919 [2024-12-06 23:51:48.262642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:36.919 [2024-12-06 23:51:48.264545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:36.919 [2024-12-06 23:51:48.264633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:36.919 [2024-12-06 23:51:48.264701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:36.919 [2024-12-06 23:51:48.264749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:36.919 [2024-12-06 23:51:48.264872] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:17:36.919 [2024-12-06 23:51:48.264882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:36.919 [2024-12-06 23:51:48.264898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:17:36.919 [2024-12-06 23:51:48.264970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:36.919 [2024-12-06 23:51:48.265037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:17:36.920 [2024-12-06 23:51:48.265044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:36.920 [2024-12-06 23:51:48.265101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:17:36.920 [2024-12-06 23:51:48.265192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:17:36.920 [2024-12-06 23:51:48.265202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:17:36.920 [2024-12-06 23:51:48.265309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:36.920 pt1
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:36.920 "name": "raid_bdev1",
00:17:36.920 "uuid": "23550767-7892-440b-b7cf-537a3a05eed7",
00:17:36.920 "strip_size_kb": 0,
00:17:36.920 "state": "online",
00:17:36.920 "raid_level": "raid1",
00:17:36.920 "superblock": true,
00:17:36.920 "num_base_bdevs": 2,
00:17:36.920 "num_base_bdevs_discovered": 1,
00:17:36.920 "num_base_bdevs_operational": 1,
00:17:36.920 "base_bdevs_list": [
00:17:36.920 {
00:17:36.920 "name": null,
00:17:36.920 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:36.920 "is_configured": false,
00:17:36.920 "data_offset": 256,
00:17:36.920 "data_size": 7936
00:17:36.920 },
00:17:36.920 {
00:17:36.920 "name": "pt2",
00:17:36.920 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:36.920 "is_configured": true,
00:17:36.920 "data_offset": 256,
00:17:36.920 "data_size": 7936
00:17:36.920 }
00:17:36.920 ]
00:17:36.920 }'
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:36.920 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.180 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:17:37.180 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:17:37.180 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.180 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.180 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.441 [2024-12-06
23:51:48.757905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 23550767-7892-440b-b7cf-537a3a05eed7 '!=' 23550767-7892-440b-b7cf-537a3a05eed7 ']' 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87349 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87349 ']' 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87349 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87349 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87349' 00:17:37.441 killing process with pid 87349 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87349 00:17:37.441 [2024-12-06 23:51:48.837037] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.441 [2024-12-06 23:51:48.837147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.441 [2024-12-06 23:51:48.837187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:37.441 [2024-12-06 23:51:48.837202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:37.441 23:51:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87349 00:17:37.702 [2024-12-06 23:51:49.045959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.645 23:51:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:38.645 00:17:38.645 real 0m6.131s 00:17:38.645 user 0m9.327s 00:17:38.645 sys 0m1.156s 00:17:38.645 23:51:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.645 ************************************ 00:17:38.645 END TEST raid_superblock_test_md_separate 00:17:38.645 ************************************ 00:17:38.645 23:51:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.645 23:51:50 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:38.645 23:51:50 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:38.645 23:51:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:38.645 23:51:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.645 23:51:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:38.645 ************************************ 00:17:38.645 START TEST raid_rebuild_test_sb_md_separate 00:17:38.645 ************************************ 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.645 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:38.646 
23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87676 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87676 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87676 ']' 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.646 23:51:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.907 [2024-12-06 23:51:50.286072] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:17:38.907 [2024-12-06 23:51:50.286241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87676 ] 00:17:38.907 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:38.907 Zero copy mechanism will not be used. 00:17:38.907 [2024-12-06 23:51:50.465512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.167 [2024-12-06 23:51:50.570022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.427 [2024-12-06 23:51:50.748394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.427 [2024-12-06 23:51:50.748535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.687 BaseBdev1_malloc
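The EAL notice in the startup output reports an I/O size of 3145728 bytes, which is exactly the `-o 3M` passed on the bdevperf command line, and the 65536-byte zero copy threshold is 64 KiB. A quick sanity check of those figures (values taken from the log lines above):

```python
# Sanity-check the sizes reported in the zero-copy notice during startup.
io_size = 3 * 1024 ** 2      # bdevperf was started with "-o 3M"
threshold = 64 * 1024        # zero copy threshold reported as 65536

print(io_size)               # 3145728, as printed in the notice
print(io_size > threshold)   # True, hence "Zero copy mechanism will not be used."
```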
00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.687 [2024-12-06 23:51:51.138801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.687 [2024-12-06 23:51:51.138897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.687 [2024-12-06 23:51:51.138935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:39.687 [2024-12-06 23:51:51.138964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.687 [2024-12-06 23:51:51.140848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.687 [2024-12-06 23:51:51.140931] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.687 BaseBdev1 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.687 BaseBdev2_malloc 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.687 [2024-12-06 23:51:51.188505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:39.687 [2024-12-06 23:51:51.188559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.687 [2024-12-06 23:51:51.188577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:39.687 [2024-12-06 23:51:51.188589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.687 [2024-12-06 23:51:51.190346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.687 [2024-12-06 23:51:51.190385] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:39.687 BaseBdev2 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.687 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.947 spare_malloc 00:17:39.947 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.947 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:39.947 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.947 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.947 spare_delay 00:17:39.947 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.947 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:39.947 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.947 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.947 [2024-12-06 23:51:51.286547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:39.947 [2024-12-06 23:51:51.286604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.947 [2024-12-06 23:51:51.286624] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:39.947 [2024-12-06 23:51:51.286634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.947 [2024-12-06 23:51:51.288460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.947 [2024-12-06 23:51:51.288503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:39.947 spare 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.948 [2024-12-06 23:51:51.298565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.948 [2024-12-06 23:51:51.300370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.948 [2024-12-06 23:51:51.300663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:39.948 [2024-12-06 23:51:51.300695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.948 [2024-12-06 23:51:51.300772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:39.948 [2024-12-06 23:51:51.300901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:39.948 [2024-12-06 23:51:51.300912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:39.948 [2024-12-06 23:51:51.301014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.948 23:51:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.948 "name": "raid_bdev1", 00:17:39.948 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:39.948 "strip_size_kb": 0, 00:17:39.948 "state": "online", 00:17:39.948 "raid_level": "raid1", 00:17:39.948 "superblock": true, 00:17:39.948 "num_base_bdevs": 2, 00:17:39.948 "num_base_bdevs_discovered": 2, 00:17:39.948 "num_base_bdevs_operational": 2, 00:17:39.948 "base_bdevs_list": [ 00:17:39.948 { 00:17:39.948 "name": "BaseBdev1", 00:17:39.948 "uuid": "58d23976-5fd1-5a0a-93e8-c9e3b6be5f7e", 00:17:39.948 "is_configured": true, 00:17:39.948 "data_offset": 256, 00:17:39.948 "data_size": 7936 00:17:39.948 }, 00:17:39.948 { 00:17:39.948 "name": "BaseBdev2", 00:17:39.948 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:39.948 "is_configured": true, 00:17:39.948 "data_offset": 256, 00:17:39.948 "data_size": 7936 
00:17:39.948 } 00:17:39.948 ] 00:17:39.948 }' 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.948 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.207 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:40.207 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:40.207 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.207 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.207 [2024-12-06 23:51:51.746035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.207 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:40.467 23:51:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:40.467 [2024-12-06 23:51:51.993465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:40.467 /dev/nbd0 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:40.727 1+0 records in 00:17:40.727 1+0 records out 00:17:40.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421281 s, 9.7 MB/s 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:40.727 23:51:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:40.727 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:41.308 7936+0 records in 00:17:41.308 7936+0 records out 00:17:41.308 32505856 bytes (33 MB, 31 MiB) copied, 0.541567 s, 60.0 MB/s 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:41.308 [2024-12-06 23:51:52.803028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:41.308 23:51:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.308 [2024-12-06 23:51:52.847510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
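The dd output a few lines above (7936+0 records of 4096 bytes copied in 0.541567 s) is internally consistent; its totals can be recomputed directly:

```python
# Recompute the dd transfer figures reported in the log above.
records, block_size = 7936, 4096              # count= and bs= from the dd command
total_bytes = records * block_size
elapsed = 0.541567                            # seconds, as reported by dd

print(total_bytes)                            # 32505856 bytes (33 MB, 31 MiB)
print(round(total_bytes / elapsed / 1e6, 1))  # 60.0 MB/s, matching dd's rate
```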
00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.308 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.566 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.566 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.566 "name": "raid_bdev1", 00:17:41.566 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:41.566 "strip_size_kb": 0, 00:17:41.566 "state": "online", 00:17:41.566 "raid_level": "raid1", 00:17:41.566 "superblock": true, 00:17:41.566 "num_base_bdevs": 2, 00:17:41.566 "num_base_bdevs_discovered": 1, 00:17:41.566 "num_base_bdevs_operational": 1, 00:17:41.566 "base_bdevs_list": [ 00:17:41.566 { 00:17:41.566 "name": null, 00:17:41.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.567 "is_configured": false, 00:17:41.567 "data_offset": 0, 00:17:41.567 "data_size": 7936 00:17:41.567 }, 00:17:41.567 { 00:17:41.567 "name": "BaseBdev2", 00:17:41.567 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:41.567 "is_configured": true, 00:17:41.567 "data_offset": 256, 00:17:41.567 "data_size": 7936 00:17:41.567 } 00:17:41.567 ] 00:17:41.567 }' 00:17:41.567 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.567 23:51:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:41.825 23:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:41.825 23:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.825 23:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.825 [2024-12-06 23:51:53.302719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.825 [2024-12-06 23:51:53.316972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:41.825 23:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.825 23:51:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:41.825 [2024-12-06 23:51:53.318805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:42.764 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.764 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.764 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.764 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.764 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.024 "name": "raid_bdev1", 00:17:43.024 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:43.024 "strip_size_kb": 0, 00:17:43.024 "state": "online", 00:17:43.024 "raid_level": "raid1", 00:17:43.024 "superblock": true, 00:17:43.024 "num_base_bdevs": 2, 00:17:43.024 "num_base_bdevs_discovered": 2, 00:17:43.024 "num_base_bdevs_operational": 2, 00:17:43.024 "process": { 00:17:43.024 "type": "rebuild", 00:17:43.024 "target": "spare", 00:17:43.024 "progress": { 00:17:43.024 "blocks": 2560, 00:17:43.024 "percent": 32 00:17:43.024 } 00:17:43.024 }, 00:17:43.024 "base_bdevs_list": [ 00:17:43.024 { 00:17:43.024 "name": "spare", 00:17:43.024 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:43.024 "is_configured": true, 00:17:43.024 "data_offset": 256, 00:17:43.024 "data_size": 7936 00:17:43.024 }, 00:17:43.024 { 00:17:43.024 "name": "BaseBdev2", 00:17:43.024 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:43.024 "is_configured": true, 00:17:43.024 "data_offset": 256, 00:17:43.024 "data_size": 7936 00:17:43.024 } 00:17:43.024 ] 00:17:43.024 }' 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.024 23:51:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.024 [2024-12-06 23:51:54.474533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.024 [2024-12-06 23:51:54.523504] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:43.024 [2024-12-06 23:51:54.523612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.024 [2024-12-06 23:51:54.523645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.024 [2024-12-06 23:51:54.523688] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.024 23:51:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.024 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.283 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.283 "name": "raid_bdev1", 00:17:43.283 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:43.283 "strip_size_kb": 0, 00:17:43.283 "state": "online", 00:17:43.283 "raid_level": "raid1", 00:17:43.283 "superblock": true, 00:17:43.283 "num_base_bdevs": 2, 00:17:43.283 "num_base_bdevs_discovered": 1, 00:17:43.283 "num_base_bdevs_operational": 1, 00:17:43.283 "base_bdevs_list": [ 00:17:43.283 { 00:17:43.283 "name": null, 00:17:43.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.283 "is_configured": false, 00:17:43.283 "data_offset": 0, 00:17:43.283 "data_size": 7936 00:17:43.283 }, 00:17:43.283 { 00:17:43.283 "name": "BaseBdev2", 00:17:43.283 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:43.283 "is_configured": true, 00:17:43.283 "data_offset": 256, 00:17:43.283 "data_size": 7936 00:17:43.283 } 00:17:43.283 ] 00:17:43.283 }' 00:17:43.283 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.283 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.544 23:51:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.544 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.544 "name": "raid_bdev1", 00:17:43.544 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:43.544 "strip_size_kb": 0, 00:17:43.544 "state": "online", 00:17:43.544 "raid_level": "raid1", 00:17:43.544 "superblock": true, 00:17:43.544 "num_base_bdevs": 2, 00:17:43.544 "num_base_bdevs_discovered": 1, 00:17:43.544 "num_base_bdevs_operational": 1, 00:17:43.544 "base_bdevs_list": [ 00:17:43.544 { 00:17:43.544 "name": null, 00:17:43.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.544 
"is_configured": false, 00:17:43.544 "data_offset": 0, 00:17:43.544 "data_size": 7936 00:17:43.544 }, 00:17:43.544 { 00:17:43.544 "name": "BaseBdev2", 00:17:43.544 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:43.544 "is_configured": true, 00:17:43.544 "data_offset": 256, 00:17:43.544 "data_size": 7936 00:17:43.544 } 00:17:43.544 ] 00:17:43.544 }' 00:17:43.544 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.544 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.544 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.805 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.805 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:43.805 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.805 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.805 [2024-12-06 23:51:55.130035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.805 [2024-12-06 23:51:55.142430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:43.805 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.805 23:51:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:43.805 [2024-12-06 23:51:55.144342] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.743 23:51:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.743 "name": "raid_bdev1", 00:17:44.743 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:44.743 "strip_size_kb": 0, 00:17:44.743 "state": "online", 00:17:44.743 "raid_level": "raid1", 00:17:44.743 "superblock": true, 00:17:44.743 "num_base_bdevs": 2, 00:17:44.743 "num_base_bdevs_discovered": 2, 00:17:44.743 "num_base_bdevs_operational": 2, 00:17:44.743 "process": { 00:17:44.743 "type": "rebuild", 00:17:44.743 "target": "spare", 00:17:44.743 "progress": { 00:17:44.743 "blocks": 2560, 00:17:44.743 "percent": 32 00:17:44.743 } 00:17:44.743 }, 00:17:44.743 "base_bdevs_list": [ 00:17:44.743 { 00:17:44.743 "name": "spare", 00:17:44.743 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:44.743 "is_configured": true, 00:17:44.743 "data_offset": 256, 00:17:44.743 "data_size": 7936 00:17:44.743 }, 
00:17:44.743 { 00:17:44.743 "name": "BaseBdev2", 00:17:44.743 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:44.743 "is_configured": true, 00:17:44.743 "data_offset": 256, 00:17:44.743 "data_size": 7936 00:17:44.743 } 00:17:44.743 ] 00:17:44.743 }' 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:44.743 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=705 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.743 23:51:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.743 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.003 "name": "raid_bdev1", 00:17:45.003 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:45.003 "strip_size_kb": 0, 00:17:45.003 "state": "online", 00:17:45.003 "raid_level": "raid1", 00:17:45.003 "superblock": true, 00:17:45.003 "num_base_bdevs": 2, 00:17:45.003 "num_base_bdevs_discovered": 2, 00:17:45.003 "num_base_bdevs_operational": 2, 00:17:45.003 "process": { 00:17:45.003 "type": "rebuild", 00:17:45.003 "target": "spare", 00:17:45.003 "progress": { 00:17:45.003 "blocks": 2816, 00:17:45.003 "percent": 35 00:17:45.003 } 00:17:45.003 }, 00:17:45.003 "base_bdevs_list": [ 00:17:45.003 { 00:17:45.003 "name": "spare", 00:17:45.003 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:45.003 "is_configured": true, 00:17:45.003 "data_offset": 256, 00:17:45.003 "data_size": 7936 00:17:45.003 }, 00:17:45.003 { 00:17:45.003 "name": "BaseBdev2", 00:17:45.003 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:45.003 
"is_configured": true, 00:17:45.003 "data_offset": 256, 00:17:45.003 "data_size": 7936 00:17:45.003 } 00:17:45.003 ] 00:17:45.003 }' 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.003 23:51:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.941 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.941 23:51:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.200 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.200 "name": "raid_bdev1", 00:17:46.200 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:46.200 "strip_size_kb": 0, 00:17:46.200 "state": "online", 00:17:46.200 "raid_level": "raid1", 00:17:46.200 "superblock": true, 00:17:46.200 "num_base_bdevs": 2, 00:17:46.200 "num_base_bdevs_discovered": 2, 00:17:46.200 "num_base_bdevs_operational": 2, 00:17:46.200 "process": { 00:17:46.200 "type": "rebuild", 00:17:46.200 "target": "spare", 00:17:46.200 "progress": { 00:17:46.200 "blocks": 5888, 00:17:46.200 "percent": 74 00:17:46.200 } 00:17:46.200 }, 00:17:46.200 "base_bdevs_list": [ 00:17:46.200 { 00:17:46.200 "name": "spare", 00:17:46.200 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:46.200 "is_configured": true, 00:17:46.200 "data_offset": 256, 00:17:46.200 "data_size": 7936 00:17:46.200 }, 00:17:46.200 { 00:17:46.200 "name": "BaseBdev2", 00:17:46.200 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:46.200 "is_configured": true, 00:17:46.200 "data_offset": 256, 00:17:46.200 "data_size": 7936 00:17:46.200 } 00:17:46.200 ] 00:17:46.200 }' 00:17:46.200 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.200 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.200 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.200 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.200 23:51:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.768 [2024-12-06 23:51:58.256265] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:17:46.768 [2024-12-06 23:51:58.256404] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:46.768 [2024-12-06 23:51:58.256506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.340 "name": "raid_bdev1", 00:17:47.340 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:47.340 "strip_size_kb": 0, 00:17:47.340 "state": "online", 00:17:47.340 "raid_level": "raid1", 00:17:47.340 "superblock": true, 00:17:47.340 
"num_base_bdevs": 2, 00:17:47.340 "num_base_bdevs_discovered": 2, 00:17:47.340 "num_base_bdevs_operational": 2, 00:17:47.340 "base_bdevs_list": [ 00:17:47.340 { 00:17:47.340 "name": "spare", 00:17:47.340 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:47.340 "is_configured": true, 00:17:47.340 "data_offset": 256, 00:17:47.340 "data_size": 7936 00:17:47.340 }, 00:17:47.340 { 00:17:47.340 "name": "BaseBdev2", 00:17:47.340 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:47.340 "is_configured": true, 00:17:47.340 "data_offset": 256, 00:17:47.340 "data_size": 7936 00:17:47.340 } 00:17:47.340 ] 00:17:47.340 }' 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.340 
23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.340 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.340 "name": "raid_bdev1", 00:17:47.340 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:47.340 "strip_size_kb": 0, 00:17:47.340 "state": "online", 00:17:47.340 "raid_level": "raid1", 00:17:47.340 "superblock": true, 00:17:47.340 "num_base_bdevs": 2, 00:17:47.340 "num_base_bdevs_discovered": 2, 00:17:47.341 "num_base_bdevs_operational": 2, 00:17:47.341 "base_bdevs_list": [ 00:17:47.341 { 00:17:47.341 "name": "spare", 00:17:47.341 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:47.341 "is_configured": true, 00:17:47.341 "data_offset": 256, 00:17:47.341 "data_size": 7936 00:17:47.341 }, 00:17:47.341 { 00:17:47.341 "name": "BaseBdev2", 00:17:47.341 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:47.341 "is_configured": true, 00:17:47.341 "data_offset": 256, 00:17:47.341 "data_size": 7936 00:17:47.341 } 00:17:47.341 ] 00:17:47.341 }' 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.341 "name": "raid_bdev1", 00:17:47.341 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:47.341 
"strip_size_kb": 0, 00:17:47.341 "state": "online", 00:17:47.341 "raid_level": "raid1", 00:17:47.341 "superblock": true, 00:17:47.341 "num_base_bdevs": 2, 00:17:47.341 "num_base_bdevs_discovered": 2, 00:17:47.341 "num_base_bdevs_operational": 2, 00:17:47.341 "base_bdevs_list": [ 00:17:47.341 { 00:17:47.341 "name": "spare", 00:17:47.341 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:47.341 "is_configured": true, 00:17:47.341 "data_offset": 256, 00:17:47.341 "data_size": 7936 00:17:47.341 }, 00:17:47.341 { 00:17:47.341 "name": "BaseBdev2", 00:17:47.341 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:47.341 "is_configured": true, 00:17:47.341 "data_offset": 256, 00:17:47.341 "data_size": 7936 00:17:47.341 } 00:17:47.341 ] 00:17:47.341 }' 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.341 23:51:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.938 [2024-12-06 23:51:59.267079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.938 [2024-12-06 23:51:59.267166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.938 [2024-12-06 23:51:59.267260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.938 [2024-12-06 23:51:59.267350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.938 [2024-12-06 23:51:59.267405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:47.938 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:48.226 /dev/nbd0 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.226 1+0 records in 00:17:48.226 1+0 records out 00:17:48.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393296 s, 10.4 MB/s 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.226 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:48.227 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.227 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.227 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:48.227 /dev/nbd1 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.500 1+0 records in 00:17:48.500 1+0 records out 00:17:48.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370348 s, 11.1 MB/s 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
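The `waitfornbd` helper visible in the trace (a `grep -q -w nbdX /proc/partitions` check inside a bounded loop, followed by a single direct-I/O `dd` read) is a bounded-retry pattern. A self-contained sketch of that pattern, with the condition passed in as a command so it runs without a real `/dev/nbd` node; the temp file stands in for the device appearing.

```shell
# Bounded retry, as in waitfornbd: try a condition up to 20 times,
# sleeping briefly between attempts. The real condition is
# `grep -q -w nbdX /proc/partitions`; here it is a parameter.
retry() {
    local i
    for ((i = 1; i <= 20; i++)); do
        "$@" && return 0
        sleep 0.05
    done
    return 1
}

tmp=$(mktemp)            # stand-in for the nbd node showing up
retry test -e "$tmp" && echo ready
rm -f "$tmp"
```

The real helper additionally verifies the device serves data (`dd ... iflag=direct` copying exactly 4096 bytes, as the `1+0 records in/out` lines above show) before declaring it ready.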
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.500 23:51:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.760 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.021 [2024-12-06 23:52:00.411344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.021 [2024-12-06 23:52:00.411402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.021 [2024-12-06 23:52:00.411423] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:49.021 [2024-12-06 23:52:00.411432] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:49.021 [2024-12-06 23:52:00.413406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.021 [2024-12-06 23:52:00.413444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.021 [2024-12-06 23:52:00.413507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:49.021 [2024-12-06 23:52:00.413556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.021 [2024-12-06 23:52:00.413705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.021 spare 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.021 [2024-12-06 23:52:00.513599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:49.021 [2024-12-06 23:52:00.513633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:49.021 [2024-12-06 23:52:00.513748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:49.021 [2024-12-06 23:52:00.513897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:49.021 [2024-12-06 23:52:00.513912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:49.021 [2024-12-06 23:52:00.514064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.021 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.022 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.022 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.022 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.022 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.022 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.022 "name": "raid_bdev1", 00:17:49.022 "uuid": 
"abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:49.022 "strip_size_kb": 0, 00:17:49.022 "state": "online", 00:17:49.022 "raid_level": "raid1", 00:17:49.022 "superblock": true, 00:17:49.022 "num_base_bdevs": 2, 00:17:49.022 "num_base_bdevs_discovered": 2, 00:17:49.022 "num_base_bdevs_operational": 2, 00:17:49.022 "base_bdevs_list": [ 00:17:49.022 { 00:17:49.022 "name": "spare", 00:17:49.022 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:49.022 "is_configured": true, 00:17:49.022 "data_offset": 256, 00:17:49.022 "data_size": 7936 00:17:49.022 }, 00:17:49.022 { 00:17:49.022 "name": "BaseBdev2", 00:17:49.022 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:49.022 "is_configured": true, 00:17:49.022 "data_offset": 256, 00:17:49.022 "data_size": 7936 00:17:49.022 } 00:17:49.022 ] 00:17:49.022 }' 00:17:49.022 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.022 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.591 "name": "raid_bdev1", 00:17:49.591 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:49.591 "strip_size_kb": 0, 00:17:49.591 "state": "online", 00:17:49.591 "raid_level": "raid1", 00:17:49.591 "superblock": true, 00:17:49.591 "num_base_bdevs": 2, 00:17:49.591 "num_base_bdevs_discovered": 2, 00:17:49.591 "num_base_bdevs_operational": 2, 00:17:49.591 "base_bdevs_list": [ 00:17:49.591 { 00:17:49.591 "name": "spare", 00:17:49.591 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:49.591 "is_configured": true, 00:17:49.591 "data_offset": 256, 00:17:49.591 "data_size": 7936 00:17:49.591 }, 00:17:49.591 { 00:17:49.591 "name": "BaseBdev2", 00:17:49.591 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:49.591 "is_configured": true, 00:17:49.591 "data_offset": 256, 00:17:49.591 "data_size": 7936 00:17:49.591 } 00:17:49.591 ] 00:17:49.591 }' 00:17:49.591 23:52:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.591 23:52:01 
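`verify_raid_bdev_process` in the trace reads `.process.type // "none"` and `.process.target // "none"`, so a bdev with no background process compares equal to `none` in both checks. A standalone sketch of that `jq` defaulting, on a sample record without a `process` field (modeled on the log, not live RPC output):

```shell
# jq's // operator supplies "none" when the process object is absent,
# which is how the trace's [[ none == \n\o\n\e ]] checks pass between rebuilds.
info='{"name": "raid_bdev1", "state": "online"}'
ptype=$(jq -r '.process.type // "none"' <<<"$info")
target=$(jq -r '.process.target // "none"' <<<"$info")
echo "$ptype $target"    # prints: none none
```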
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.591 [2024-12-06 23:52:01.134132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.591 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.850 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.850 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.850 "name": "raid_bdev1", 00:17:49.850 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:49.850 "strip_size_kb": 0, 00:17:49.850 "state": "online", 00:17:49.850 "raid_level": "raid1", 00:17:49.850 "superblock": true, 00:17:49.850 "num_base_bdevs": 2, 00:17:49.850 "num_base_bdevs_discovered": 1, 00:17:49.850 "num_base_bdevs_operational": 1, 00:17:49.850 "base_bdevs_list": [ 00:17:49.850 { 00:17:49.850 "name": null, 00:17:49.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.850 "is_configured": false, 00:17:49.850 "data_offset": 0, 00:17:49.850 "data_size": 7936 00:17:49.850 }, 00:17:49.851 { 00:17:49.851 "name": "BaseBdev2", 00:17:49.851 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:49.851 "is_configured": true, 00:17:49.851 "data_offset": 256, 00:17:49.851 "data_size": 7936 00:17:49.851 } 00:17:49.851 ] 00:17:49.851 }' 00:17:49.851 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.851 23:52:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.110 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.110 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.110 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.110 [2024-12-06 23:52:01.597339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.110 [2024-12-06 23:52:01.597491] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.110 [2024-12-06 23:52:01.597508] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:50.110 [2024-12-06 23:52:01.597541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.110 [2024-12-06 23:52:01.611240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:50.111 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.111 23:52:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:50.111 [2024-12-06 23:52:01.613047] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.493 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.494 "name": "raid_bdev1", 00:17:51.494 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:51.494 "strip_size_kb": 0, 00:17:51.494 "state": "online", 00:17:51.494 "raid_level": "raid1", 00:17:51.494 "superblock": true, 00:17:51.494 "num_base_bdevs": 2, 00:17:51.494 "num_base_bdevs_discovered": 2, 00:17:51.494 "num_base_bdevs_operational": 2, 00:17:51.494 "process": { 00:17:51.494 "type": "rebuild", 00:17:51.494 "target": "spare", 00:17:51.494 "progress": { 00:17:51.494 "blocks": 2560, 00:17:51.494 "percent": 32 00:17:51.494 } 00:17:51.494 }, 00:17:51.494 "base_bdevs_list": [ 00:17:51.494 { 00:17:51.494 "name": "spare", 00:17:51.494 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:51.494 "is_configured": true, 00:17:51.494 "data_offset": 256, 00:17:51.494 "data_size": 7936 00:17:51.494 }, 00:17:51.494 { 00:17:51.494 "name": "BaseBdev2", 00:17:51.494 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:51.494 "is_configured": true, 00:17:51.494 "data_offset": 256, 00:17:51.494 "data_size": 7936 00:17:51.494 } 00:17:51.494 ] 00:17:51.494 }' 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
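The rebuild `progress` object above reports `blocks: 2560` and `percent: 32` against a `data_size` of 7936 blocks, which is consistent with the percentage being blocks over data size, integer-truncated. This arithmetic is an inference from the numbers in the trace, not taken from the SPDK source:

```shell
# Progress numbers as they appear in the trace: 2560 of 7936 blocks rebuilt.
blocks=2560
data_size=7936
percent=$(( blocks * 100 / data_size ))   # integer division truncates
echo "$percent"    # prints: 32
```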
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 [2024-12-06 23:52:02.772870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.494 [2024-12-06 23:52:02.817771] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:51.494 [2024-12-06 23:52:02.817831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.494 [2024-12-06 23:52:02.817844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.494 [2024-12-06 23:52:02.817863] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.494 "name": "raid_bdev1", 00:17:51.494 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:51.494 "strip_size_kb": 0, 00:17:51.494 "state": "online", 00:17:51.494 "raid_level": "raid1", 00:17:51.494 "superblock": true, 00:17:51.494 "num_base_bdevs": 2, 00:17:51.494 "num_base_bdevs_discovered": 1, 00:17:51.494 "num_base_bdevs_operational": 1, 00:17:51.494 "base_bdevs_list": [ 00:17:51.494 { 00:17:51.494 "name": null, 00:17:51.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.494 
"is_configured": false, 00:17:51.494 "data_offset": 0, 00:17:51.494 "data_size": 7936 00:17:51.494 }, 00:17:51.494 { 00:17:51.494 "name": "BaseBdev2", 00:17:51.494 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:51.494 "is_configured": true, 00:17:51.494 "data_offset": 256, 00:17:51.494 "data_size": 7936 00:17:51.494 } 00:17:51.494 ] 00:17:51.494 }' 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.494 23:52:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.754 23:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.754 23:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.754 23:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.754 [2024-12-06 23:52:03.252441] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.754 [2024-12-06 23:52:03.252499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.754 [2024-12-06 23:52:03.252525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:51.754 [2024-12-06 23:52:03.252535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.754 [2024-12-06 23:52:03.252803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.754 [2024-12-06 23:52:03.252833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.754 [2024-12-06 23:52:03.252885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:51.754 [2024-12-06 23:52:03.252897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:17:51.754 [2024-12-06 23:52:03.252907] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:51.754 [2024-12-06 23:52:03.252934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.754 [2024-12-06 23:52:03.266021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:51.754 spare 00:17:51.754 23:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.754 23:52:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:51.754 [2024-12-06 23:52:03.267814] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:53.134 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.135 "name": "raid_bdev1", 00:17:53.135 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:53.135 "strip_size_kb": 0, 00:17:53.135 "state": "online", 00:17:53.135 "raid_level": "raid1", 00:17:53.135 "superblock": true, 00:17:53.135 "num_base_bdevs": 2, 00:17:53.135 "num_base_bdevs_discovered": 2, 00:17:53.135 "num_base_bdevs_operational": 2, 00:17:53.135 "process": { 00:17:53.135 "type": "rebuild", 00:17:53.135 "target": "spare", 00:17:53.135 "progress": { 00:17:53.135 "blocks": 2560, 00:17:53.135 "percent": 32 00:17:53.135 } 00:17:53.135 }, 00:17:53.135 "base_bdevs_list": [ 00:17:53.135 { 00:17:53.135 "name": "spare", 00:17:53.135 "uuid": "545b6c9d-3c42-5d78-a2c9-9d15b712ced2", 00:17:53.135 "is_configured": true, 00:17:53.135 "data_offset": 256, 00:17:53.135 "data_size": 7936 00:17:53.135 }, 00:17:53.135 { 00:17:53.135 "name": "BaseBdev2", 00:17:53.135 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:53.135 "is_configured": true, 00:17:53.135 "data_offset": 256, 00:17:53.135 "data_size": 7936 00:17:53.135 } 00:17:53.135 ] 00:17:53.135 }' 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.135 23:52:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.135 [2024-12-06 23:52:04.424512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.135 [2024-12-06 23:52:04.472366] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:53.135 [2024-12-06 23:52:04.472419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.135 [2024-12-06 23:52:04.472436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.135 [2024-12-06 23:52:04.472443] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.135 23:52:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.135 "name": "raid_bdev1", 00:17:53.135 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:53.135 "strip_size_kb": 0, 00:17:53.135 "state": "online", 00:17:53.135 "raid_level": "raid1", 00:17:53.135 "superblock": true, 00:17:53.135 "num_base_bdevs": 2, 00:17:53.135 "num_base_bdevs_discovered": 1, 00:17:53.135 "num_base_bdevs_operational": 1, 00:17:53.135 "base_bdevs_list": [ 00:17:53.135 { 00:17:53.135 "name": null, 00:17:53.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.135 "is_configured": false, 00:17:53.135 "data_offset": 0, 00:17:53.135 "data_size": 7936 00:17:53.135 }, 00:17:53.135 { 00:17:53.135 "name": "BaseBdev2", 00:17:53.135 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:53.135 "is_configured": true, 00:17:53.135 "data_offset": 256, 00:17:53.135 "data_size": 7936 00:17:53.135 } 00:17:53.135 ] 00:17:53.135 }' 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.135 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.395 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.654 23:52:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.654 "name": "raid_bdev1", 00:17:53.654 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:53.654 "strip_size_kb": 0, 00:17:53.654 "state": "online", 00:17:53.654 "raid_level": "raid1", 00:17:53.654 "superblock": true, 00:17:53.654 "num_base_bdevs": 2, 00:17:53.654 "num_base_bdevs_discovered": 1, 00:17:53.654 "num_base_bdevs_operational": 1, 00:17:53.654 "base_bdevs_list": [ 00:17:53.654 { 00:17:53.654 "name": null, 00:17:53.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.654 "is_configured": false, 00:17:53.654 "data_offset": 0, 00:17:53.654 "data_size": 7936 00:17:53.654 }, 00:17:53.654 { 00:17:53.654 "name": "BaseBdev2", 00:17:53.654 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:53.654 "is_configured": true, 
00:17:53.654 "data_offset": 256, 00:17:53.654 "data_size": 7936 00:17:53.654 } 00:17:53.654 ] 00:17:53.654 }' 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.654 [2024-12-06 23:52:05.122108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:53.654 [2024-12-06 23:52:05.122163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.654 [2024-12-06 23:52:05.122184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:53.654 [2024-12-06 23:52:05.122193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.654 [2024-12-06 23:52:05.122419] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.654 [2024-12-06 23:52:05.122438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.654 [2024-12-06 23:52:05.122501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:53.654 [2024-12-06 23:52:05.122513] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.654 [2024-12-06 23:52:05.122522] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:53.654 [2024-12-06 23:52:05.122533] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:53.654 BaseBdev1 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.654 23:52:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.593 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.854 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.854 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.854 "name": "raid_bdev1", 00:17:54.854 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:54.854 "strip_size_kb": 0, 00:17:54.854 "state": "online", 00:17:54.854 "raid_level": "raid1", 00:17:54.854 "superblock": true, 00:17:54.854 "num_base_bdevs": 2, 00:17:54.854 "num_base_bdevs_discovered": 1, 00:17:54.854 "num_base_bdevs_operational": 1, 00:17:54.854 "base_bdevs_list": [ 00:17:54.854 { 00:17:54.854 "name": null, 00:17:54.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.854 "is_configured": false, 00:17:54.854 "data_offset": 0, 00:17:54.854 "data_size": 7936 00:17:54.854 }, 00:17:54.854 { 00:17:54.854 "name": "BaseBdev2", 00:17:54.854 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:54.854 "is_configured": true, 00:17:54.854 "data_offset": 256, 00:17:54.854 "data_size": 7936 00:17:54.854 } 00:17:54.854 ] 00:17:54.854 }' 00:17:54.854 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.854 23:52:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.115 "name": "raid_bdev1", 00:17:55.115 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:55.115 "strip_size_kb": 0, 00:17:55.115 "state": "online", 00:17:55.115 "raid_level": "raid1", 00:17:55.115 "superblock": true, 00:17:55.115 "num_base_bdevs": 2, 00:17:55.115 "num_base_bdevs_discovered": 1, 00:17:55.115 "num_base_bdevs_operational": 1, 00:17:55.115 "base_bdevs_list": [ 00:17:55.115 { 00:17:55.115 "name": null, 00:17:55.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.115 "is_configured": false, 00:17:55.115 "data_offset": 0, 00:17:55.115 
"data_size": 7936 00:17:55.115 }, 00:17:55.115 { 00:17:55.115 "name": "BaseBdev2", 00:17:55.115 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:55.115 "is_configured": true, 00:17:55.115 "data_offset": 256, 00:17:55.115 "data_size": 7936 00:17:55.115 } 00:17:55.115 ] 00:17:55.115 }' 00:17:55.115 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.376 [2024-12-06 23:52:06.759353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.376 [2024-12-06 23:52:06.759499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.376 [2024-12-06 23:52:06.759513] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:55.376 request: 00:17:55.376 { 00:17:55.376 "base_bdev": "BaseBdev1", 00:17:55.376 "raid_bdev": "raid_bdev1", 00:17:55.376 "method": "bdev_raid_add_base_bdev", 00:17:55.376 "req_id": 1 00:17:55.376 } 00:17:55.376 Got JSON-RPC error response 00:17:55.376 response: 00:17:55.376 { 00:17:55.376 "code": -22, 00:17:55.376 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:55.376 } 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.376 23:52:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.318 "name": "raid_bdev1", 00:17:56.318 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:56.318 "strip_size_kb": 0, 00:17:56.318 "state": "online", 00:17:56.318 "raid_level": "raid1", 00:17:56.318 "superblock": true, 00:17:56.318 "num_base_bdevs": 2, 00:17:56.318 "num_base_bdevs_discovered": 1, 00:17:56.318 "num_base_bdevs_operational": 1, 00:17:56.318 "base_bdevs_list": [ 
00:17:56.318 { 00:17:56.318 "name": null, 00:17:56.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.318 "is_configured": false, 00:17:56.318 "data_offset": 0, 00:17:56.318 "data_size": 7936 00:17:56.318 }, 00:17:56.318 { 00:17:56.318 "name": "BaseBdev2", 00:17:56.318 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:56.318 "is_configured": true, 00:17:56.318 "data_offset": 256, 00:17:56.318 "data_size": 7936 00:17:56.318 } 00:17:56.318 ] 00:17:56.318 }' 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.318 23:52:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.890 "name": "raid_bdev1", 00:17:56.890 "uuid": "abdd0d8a-7e48-4f1b-9f90-5d247edc7f72", 00:17:56.890 "strip_size_kb": 0, 00:17:56.890 "state": "online", 00:17:56.890 "raid_level": "raid1", 00:17:56.890 "superblock": true, 00:17:56.890 "num_base_bdevs": 2, 00:17:56.890 "num_base_bdevs_discovered": 1, 00:17:56.890 "num_base_bdevs_operational": 1, 00:17:56.890 "base_bdevs_list": [ 00:17:56.890 { 00:17:56.890 "name": null, 00:17:56.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.890 "is_configured": false, 00:17:56.890 "data_offset": 0, 00:17:56.890 "data_size": 7936 00:17:56.890 }, 00:17:56.890 { 00:17:56.890 "name": "BaseBdev2", 00:17:56.890 "uuid": "e364aae9-a6dd-5e44-a90a-f19c3710d5f9", 00:17:56.890 "is_configured": true, 00:17:56.890 "data_offset": 256, 00:17:56.890 "data_size": 7936 00:17:56.890 } 00:17:56.890 ] 00:17:56.890 }' 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87676 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87676 ']' 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87676 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.890 
23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87676 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.890 killing process with pid 87676 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87676' 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87676 00:17:56.890 Received shutdown signal, test time was about 60.000000 seconds 00:17:56.890 00:17:56.890 Latency(us) 00:17:56.890 [2024-12-06T23:52:08.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.890 [2024-12-06T23:52:08.453Z] =================================================================================================================== 00:17:56.890 [2024-12-06T23:52:08.453Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.890 [2024-12-06 23:52:08.391328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.890 [2024-12-06 23:52:08.391442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.890 [2024-12-06 23:52:08.391496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.890 23:52:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87676 00:17:56.890 [2024-12-06 23:52:08.391507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:57.151 [2024-12-06 23:52:08.696216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.536 23:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:17:58.536 00:17:58.536 real 0m19.552s 00:17:58.536 user 0m25.530s 00:17:58.536 sys 0m2.614s 00:17:58.536 23:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.536 23:52:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.536 ************************************ 00:17:58.536 END TEST raid_rebuild_test_sb_md_separate 00:17:58.536 ************************************ 00:17:58.536 23:52:09 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:58.536 23:52:09 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:58.536 23:52:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:58.536 23:52:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.536 23:52:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.536 ************************************ 00:17:58.536 START TEST raid_state_function_test_sb_md_interleaved 00:17:58.536 ************************************ 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:58.536 23:52:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88363 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88363' 00:17:58.536 Process raid pid: 88363 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88363 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88363 ']' 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.536 23:52:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.536 [2024-12-06 23:52:09.913546] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:17:58.536 [2024-12-06 23:52:09.913678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.536 [2024-12-06 23:52:10.090356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.796 [2024-12-06 23:52:10.195087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.056 [2024-12-06 23:52:10.391960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.056 [2024-12-06 23:52:10.392006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.318 [2024-12-06 23:52:10.744882] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.318 [2024-12-06 23:52:10.744935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.318 [2024-12-06 23:52:10.744946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.318 [2024-12-06 23:52:10.744955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.318 23:52:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.318 23:52:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.318 "name": "Existed_Raid", 00:17:59.318 "uuid": "f56dfe02-1fd5-4911-b6a0-6f43e0697165", 00:17:59.318 "strip_size_kb": 0, 00:17:59.318 "state": "configuring", 00:17:59.318 "raid_level": "raid1", 00:17:59.318 "superblock": true, 00:17:59.318 "num_base_bdevs": 2, 00:17:59.318 "num_base_bdevs_discovered": 0, 00:17:59.318 "num_base_bdevs_operational": 2, 00:17:59.318 "base_bdevs_list": [ 00:17:59.318 { 00:17:59.318 "name": "BaseBdev1", 00:17:59.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.318 "is_configured": false, 00:17:59.318 "data_offset": 0, 00:17:59.318 "data_size": 0 00:17:59.318 }, 00:17:59.318 { 00:17:59.318 "name": "BaseBdev2", 00:17:59.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.318 "is_configured": false, 00:17:59.318 "data_offset": 0, 00:17:59.318 "data_size": 0 00:17:59.318 } 00:17:59.318 ] 00:17:59.318 }' 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.318 23:52:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.888 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:59.888 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.888 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.888 [2024-12-06 23:52:11.228044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:59.888 [2024-12-06 23:52:11.228080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:59.888 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.889 [2024-12-06 23:52:11.240023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.889 [2024-12-06 23:52:11.240061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.889 [2024-12-06 23:52:11.240069] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.889 [2024-12-06 23:52:11.240079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.889 [2024-12-06 23:52:11.285430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.889 BaseBdev1 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.889 [ 00:17:59.889 { 00:17:59.889 "name": "BaseBdev1", 00:17:59.889 "aliases": [ 00:17:59.889 "27378fe3-f2d6-4903-8d5b-e53444c85395" 00:17:59.889 ], 00:17:59.889 "product_name": "Malloc disk", 00:17:59.889 "block_size": 4128, 00:17:59.889 "num_blocks": 8192, 00:17:59.889 "uuid": "27378fe3-f2d6-4903-8d5b-e53444c85395", 00:17:59.889 "md_size": 32, 00:17:59.889 
"md_interleave": true, 00:17:59.889 "dif_type": 0, 00:17:59.889 "assigned_rate_limits": { 00:17:59.889 "rw_ios_per_sec": 0, 00:17:59.889 "rw_mbytes_per_sec": 0, 00:17:59.889 "r_mbytes_per_sec": 0, 00:17:59.889 "w_mbytes_per_sec": 0 00:17:59.889 }, 00:17:59.889 "claimed": true, 00:17:59.889 "claim_type": "exclusive_write", 00:17:59.889 "zoned": false, 00:17:59.889 "supported_io_types": { 00:17:59.889 "read": true, 00:17:59.889 "write": true, 00:17:59.889 "unmap": true, 00:17:59.889 "flush": true, 00:17:59.889 "reset": true, 00:17:59.889 "nvme_admin": false, 00:17:59.889 "nvme_io": false, 00:17:59.889 "nvme_io_md": false, 00:17:59.889 "write_zeroes": true, 00:17:59.889 "zcopy": true, 00:17:59.889 "get_zone_info": false, 00:17:59.889 "zone_management": false, 00:17:59.889 "zone_append": false, 00:17:59.889 "compare": false, 00:17:59.889 "compare_and_write": false, 00:17:59.889 "abort": true, 00:17:59.889 "seek_hole": false, 00:17:59.889 "seek_data": false, 00:17:59.889 "copy": true, 00:17:59.889 "nvme_iov_md": false 00:17:59.889 }, 00:17:59.889 "memory_domains": [ 00:17:59.889 { 00:17:59.889 "dma_device_id": "system", 00:17:59.889 "dma_device_type": 1 00:17:59.889 }, 00:17:59.889 { 00:17:59.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.889 "dma_device_type": 2 00:17:59.889 } 00:17:59.889 ], 00:17:59.889 "driver_specific": {} 00:17:59.889 } 00:17:59.889 ] 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.889 23:52:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.889 "name": "Existed_Raid", 00:17:59.889 "uuid": "c82a3fef-5e49-4223-ae2e-c740da3e2ee3", 00:17:59.889 "strip_size_kb": 0, 00:17:59.889 "state": "configuring", 00:17:59.889 "raid_level": "raid1", 
00:17:59.889 "superblock": true, 00:17:59.889 "num_base_bdevs": 2, 00:17:59.889 "num_base_bdevs_discovered": 1, 00:17:59.889 "num_base_bdevs_operational": 2, 00:17:59.889 "base_bdevs_list": [ 00:17:59.889 { 00:17:59.889 "name": "BaseBdev1", 00:17:59.889 "uuid": "27378fe3-f2d6-4903-8d5b-e53444c85395", 00:17:59.889 "is_configured": true, 00:17:59.889 "data_offset": 256, 00:17:59.889 "data_size": 7936 00:17:59.889 }, 00:17:59.889 { 00:17:59.889 "name": "BaseBdev2", 00:17:59.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.889 "is_configured": false, 00:17:59.889 "data_offset": 0, 00:17:59.889 "data_size": 0 00:17:59.889 } 00:17:59.889 ] 00:17:59.889 }' 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.889 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.457 [2024-12-06 23:52:11.744713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:00.457 [2024-12-06 23:52:11.744756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.457 [2024-12-06 23:52:11.756740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.457 [2024-12-06 23:52:11.758498] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.457 [2024-12-06 23:52:11.758536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.457 
23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.457 "name": "Existed_Raid", 00:18:00.457 "uuid": "3c3dbd4c-ee16-4a4f-bc4e-68330651d56c", 00:18:00.457 "strip_size_kb": 0, 00:18:00.457 "state": "configuring", 00:18:00.457 "raid_level": "raid1", 00:18:00.457 "superblock": true, 00:18:00.457 "num_base_bdevs": 2, 00:18:00.457 "num_base_bdevs_discovered": 1, 00:18:00.457 "num_base_bdevs_operational": 2, 00:18:00.457 "base_bdevs_list": [ 00:18:00.457 { 00:18:00.457 "name": "BaseBdev1", 00:18:00.457 "uuid": "27378fe3-f2d6-4903-8d5b-e53444c85395", 00:18:00.457 "is_configured": true, 00:18:00.457 "data_offset": 256, 00:18:00.457 "data_size": 7936 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "BaseBdev2", 00:18:00.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.457 "is_configured": false, 00:18:00.457 "data_offset": 0, 00:18:00.457 "data_size": 0 00:18:00.457 } 00:18:00.457 ] 00:18:00.457 }' 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:00.457 23:52:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.717 [2024-12-06 23:52:12.273181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.717 [2024-12-06 23:52:12.273522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:00.717 [2024-12-06 23:52:12.273578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:00.717 [2024-12-06 23:52:12.273700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:00.717 [2024-12-06 23:52:12.273811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:00.717 [2024-12-06 23:52:12.273852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:00.717 [2024-12-06 23:52:12.273949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.717 BaseBdev2 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.717 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.978 [ 00:18:00.978 { 00:18:00.978 "name": "BaseBdev2", 00:18:00.978 "aliases": [ 00:18:00.978 "c4324000-16cb-4a62-9cd6-8c57e6ab983a" 00:18:00.978 ], 00:18:00.978 "product_name": "Malloc disk", 00:18:00.978 "block_size": 4128, 00:18:00.978 "num_blocks": 8192, 00:18:00.978 "uuid": "c4324000-16cb-4a62-9cd6-8c57e6ab983a", 00:18:00.978 "md_size": 32, 00:18:00.978 "md_interleave": true, 00:18:00.978 "dif_type": 0, 00:18:00.978 "assigned_rate_limits": { 00:18:00.978 "rw_ios_per_sec": 0, 00:18:00.978 "rw_mbytes_per_sec": 0, 00:18:00.978 "r_mbytes_per_sec": 0, 00:18:00.978 "w_mbytes_per_sec": 0 00:18:00.978 }, 00:18:00.978 "claimed": true, 00:18:00.978 "claim_type": "exclusive_write", 
00:18:00.978 "zoned": false, 00:18:00.978 "supported_io_types": { 00:18:00.978 "read": true, 00:18:00.978 "write": true, 00:18:00.978 "unmap": true, 00:18:00.978 "flush": true, 00:18:00.978 "reset": true, 00:18:00.978 "nvme_admin": false, 00:18:00.978 "nvme_io": false, 00:18:00.978 "nvme_io_md": false, 00:18:00.978 "write_zeroes": true, 00:18:00.978 "zcopy": true, 00:18:00.978 "get_zone_info": false, 00:18:00.978 "zone_management": false, 00:18:00.978 "zone_append": false, 00:18:00.978 "compare": false, 00:18:00.978 "compare_and_write": false, 00:18:00.978 "abort": true, 00:18:00.978 "seek_hole": false, 00:18:00.978 "seek_data": false, 00:18:00.978 "copy": true, 00:18:00.978 "nvme_iov_md": false 00:18:00.978 }, 00:18:00.978 "memory_domains": [ 00:18:00.978 { 00:18:00.978 "dma_device_id": "system", 00:18:00.978 "dma_device_type": 1 00:18:00.978 }, 00:18:00.978 { 00:18:00.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.978 "dma_device_type": 2 00:18:00.978 } 00:18:00.978 ], 00:18:00.978 "driver_specific": {} 00:18:00.978 } 00:18:00.978 ] 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:00.978 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.979 
23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.979 "name": "Existed_Raid", 00:18:00.979 "uuid": "3c3dbd4c-ee16-4a4f-bc4e-68330651d56c", 00:18:00.979 "strip_size_kb": 0, 00:18:00.979 "state": "online", 00:18:00.979 "raid_level": "raid1", 00:18:00.979 "superblock": true, 00:18:00.979 "num_base_bdevs": 2, 00:18:00.979 "num_base_bdevs_discovered": 2, 00:18:00.979 
"num_base_bdevs_operational": 2, 00:18:00.979 "base_bdevs_list": [ 00:18:00.979 { 00:18:00.979 "name": "BaseBdev1", 00:18:00.979 "uuid": "27378fe3-f2d6-4903-8d5b-e53444c85395", 00:18:00.979 "is_configured": true, 00:18:00.979 "data_offset": 256, 00:18:00.979 "data_size": 7936 00:18:00.979 }, 00:18:00.979 { 00:18:00.979 "name": "BaseBdev2", 00:18:00.979 "uuid": "c4324000-16cb-4a62-9cd6-8c57e6ab983a", 00:18:00.979 "is_configured": true, 00:18:00.979 "data_offset": 256, 00:18:00.979 "data_size": 7936 00:18:00.979 } 00:18:00.979 ] 00:18:00.979 }' 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.979 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.238 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.238 23:52:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.238 [2024-12-06 23:52:12.796536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.499 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.499 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.499 "name": "Existed_Raid", 00:18:01.499 "aliases": [ 00:18:01.499 "3c3dbd4c-ee16-4a4f-bc4e-68330651d56c" 00:18:01.499 ], 00:18:01.499 "product_name": "Raid Volume", 00:18:01.499 "block_size": 4128, 00:18:01.499 "num_blocks": 7936, 00:18:01.499 "uuid": "3c3dbd4c-ee16-4a4f-bc4e-68330651d56c", 00:18:01.499 "md_size": 32, 00:18:01.499 "md_interleave": true, 00:18:01.499 "dif_type": 0, 00:18:01.499 "assigned_rate_limits": { 00:18:01.499 "rw_ios_per_sec": 0, 00:18:01.499 "rw_mbytes_per_sec": 0, 00:18:01.499 "r_mbytes_per_sec": 0, 00:18:01.499 "w_mbytes_per_sec": 0 00:18:01.499 }, 00:18:01.499 "claimed": false, 00:18:01.499 "zoned": false, 00:18:01.499 "supported_io_types": { 00:18:01.499 "read": true, 00:18:01.499 "write": true, 00:18:01.499 "unmap": false, 00:18:01.499 "flush": false, 00:18:01.500 "reset": true, 00:18:01.500 "nvme_admin": false, 00:18:01.500 "nvme_io": false, 00:18:01.500 "nvme_io_md": false, 00:18:01.500 "write_zeroes": true, 00:18:01.500 "zcopy": false, 00:18:01.500 "get_zone_info": false, 00:18:01.500 "zone_management": false, 00:18:01.500 "zone_append": false, 00:18:01.500 "compare": false, 00:18:01.500 "compare_and_write": false, 00:18:01.500 "abort": false, 00:18:01.500 "seek_hole": false, 00:18:01.500 "seek_data": false, 00:18:01.500 "copy": false, 00:18:01.500 "nvme_iov_md": false 00:18:01.500 }, 00:18:01.500 "memory_domains": [ 00:18:01.500 { 00:18:01.500 "dma_device_id": "system", 00:18:01.500 "dma_device_type": 1 00:18:01.500 }, 00:18:01.500 { 00:18:01.500 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:01.500 "dma_device_type": 2 00:18:01.500 }, 00:18:01.500 { 00:18:01.500 "dma_device_id": "system", 00:18:01.500 "dma_device_type": 1 00:18:01.500 }, 00:18:01.500 { 00:18:01.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.500 "dma_device_type": 2 00:18:01.500 } 00:18:01.500 ], 00:18:01.500 "driver_specific": { 00:18:01.500 "raid": { 00:18:01.500 "uuid": "3c3dbd4c-ee16-4a4f-bc4e-68330651d56c", 00:18:01.500 "strip_size_kb": 0, 00:18:01.500 "state": "online", 00:18:01.500 "raid_level": "raid1", 00:18:01.500 "superblock": true, 00:18:01.500 "num_base_bdevs": 2, 00:18:01.500 "num_base_bdevs_discovered": 2, 00:18:01.500 "num_base_bdevs_operational": 2, 00:18:01.500 "base_bdevs_list": [ 00:18:01.500 { 00:18:01.500 "name": "BaseBdev1", 00:18:01.500 "uuid": "27378fe3-f2d6-4903-8d5b-e53444c85395", 00:18:01.500 "is_configured": true, 00:18:01.500 "data_offset": 256, 00:18:01.500 "data_size": 7936 00:18:01.500 }, 00:18:01.500 { 00:18:01.500 "name": "BaseBdev2", 00:18:01.500 "uuid": "c4324000-16cb-4a62-9cd6-8c57e6ab983a", 00:18:01.500 "is_configured": true, 00:18:01.500 "data_offset": 256, 00:18:01.500 "data_size": 7936 00:18:01.500 } 00:18:01.500 ] 00:18:01.500 } 00:18:01.500 } 00:18:01.500 }' 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:01.500 BaseBdev2' 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.500 23:52:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.500 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:01.500 
23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:01.500 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:01.500 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.500 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.500 [2024-12-06 23:52:13.036097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.761 23:52:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.761 "name": "Existed_Raid", 00:18:01.761 "uuid": "3c3dbd4c-ee16-4a4f-bc4e-68330651d56c", 00:18:01.761 "strip_size_kb": 0, 00:18:01.761 "state": "online", 00:18:01.761 "raid_level": "raid1", 00:18:01.761 "superblock": true, 00:18:01.761 "num_base_bdevs": 2, 00:18:01.761 "num_base_bdevs_discovered": 1, 00:18:01.761 "num_base_bdevs_operational": 1, 00:18:01.761 "base_bdevs_list": [ 00:18:01.761 { 00:18:01.761 "name": null, 00:18:01.761 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:01.761 "is_configured": false, 00:18:01.761 "data_offset": 0, 00:18:01.761 "data_size": 7936 00:18:01.761 }, 00:18:01.761 { 00:18:01.761 "name": "BaseBdev2", 00:18:01.761 "uuid": "c4324000-16cb-4a62-9cd6-8c57e6ab983a", 00:18:01.761 "is_configured": true, 00:18:01.761 "data_offset": 256, 00:18:01.761 "data_size": 7936 00:18:01.761 } 00:18:01.761 ] 00:18:01.761 }' 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.761 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.021 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:02.021 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:02.021 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.021 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.021 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.021 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:02.021 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:02.282 23:52:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.282 [2024-12-06 23:52:13.598072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:02.282 [2024-12-06 23:52:13.598183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.282 [2024-12-06 23:52:13.687679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.282 [2024-12-06 23:52:13.687810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.282 [2024-12-06 23:52:13.687829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88363 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88363 ']' 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88363 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88363 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88363' 00:18:02.282 killing process with pid 88363 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88363 00:18:02.282 [2024-12-06 23:52:13.781179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.282 23:52:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88363 00:18:02.282 [2024-12-06 23:52:13.797587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.666 
23:52:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:03.666 00:18:03.666 real 0m5.048s 00:18:03.666 user 0m7.326s 00:18:03.666 sys 0m0.884s 00:18:03.666 23:52:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.666 ************************************ 00:18:03.666 END TEST raid_state_function_test_sb_md_interleaved 00:18:03.666 ************************************ 00:18:03.666 23:52:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.666 23:52:14 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:03.666 23:52:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:03.666 23:52:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.666 23:52:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.666 ************************************ 00:18:03.667 START TEST raid_superblock_test_md_interleaved 00:18:03.667 ************************************ 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88610 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88610 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88610 ']' 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.667 23:52:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.667 [2024-12-06 23:52:15.030146] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:18:03.667 [2024-12-06 23:52:15.030338] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88610 ] 00:18:03.667 [2024-12-06 23:52:15.203452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.927 [2024-12-06 23:52:15.311847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.186 [2024-12-06 23:52:15.503899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.186 [2024-12-06 23:52:15.503954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.445 malloc1 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.445 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.445 [2024-12-06 23:52:15.897019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.445 [2024-12-06 23:52:15.897158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.445 [2024-12-06 23:52:15.897196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:04.445 [2024-12-06 23:52:15.897224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.445 
[2024-12-06 23:52:15.898985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.446 [2024-12-06 23:52:15.899070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.446 pt1 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.446 malloc2 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.446 [2024-12-06 23:52:15.954162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.446 [2024-12-06 23:52:15.954216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.446 [2024-12-06 23:52:15.954236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:04.446 [2024-12-06 23:52:15.954244] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.446 [2024-12-06 23:52:15.955957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.446 [2024-12-06 23:52:15.956058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.446 pt2 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.446 [2024-12-06 23:52:15.966175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.446 [2024-12-06 23:52:15.967870] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.446 [2024-12-06 23:52:15.968064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:04.446 [2024-12-06 23:52:15.968078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:04.446 [2024-12-06 23:52:15.968151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:04.446 [2024-12-06 23:52:15.968219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:04.446 [2024-12-06 23:52:15.968230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:04.446 [2024-12-06 23:52:15.968298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.446 
23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.446 23:52:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.726 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.726 "name": "raid_bdev1", 00:18:04.726 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:04.726 "strip_size_kb": 0, 00:18:04.726 "state": "online", 00:18:04.726 "raid_level": "raid1", 00:18:04.726 "superblock": true, 00:18:04.726 "num_base_bdevs": 2, 00:18:04.726 "num_base_bdevs_discovered": 2, 00:18:04.726 "num_base_bdevs_operational": 2, 00:18:04.726 "base_bdevs_list": [ 00:18:04.726 { 00:18:04.726 "name": "pt1", 00:18:04.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.726 "is_configured": true, 00:18:04.726 "data_offset": 256, 00:18:04.726 "data_size": 7936 00:18:04.726 }, 00:18:04.726 { 00:18:04.726 "name": "pt2", 00:18:04.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.726 "is_configured": true, 00:18:04.726 "data_offset": 256, 00:18:04.726 "data_size": 7936 00:18:04.726 } 00:18:04.726 ] 00:18:04.726 }' 00:18:04.726 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.726 23:52:16 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.986 [2024-12-06 23:52:16.445546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.986 "name": "raid_bdev1", 00:18:04.986 "aliases": [ 00:18:04.986 "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38" 00:18:04.986 ], 00:18:04.986 "product_name": "Raid Volume", 00:18:04.986 "block_size": 4128, 00:18:04.986 "num_blocks": 7936, 00:18:04.986 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:04.986 "md_size": 32, 
00:18:04.986 "md_interleave": true, 00:18:04.986 "dif_type": 0, 00:18:04.986 "assigned_rate_limits": { 00:18:04.986 "rw_ios_per_sec": 0, 00:18:04.986 "rw_mbytes_per_sec": 0, 00:18:04.986 "r_mbytes_per_sec": 0, 00:18:04.986 "w_mbytes_per_sec": 0 00:18:04.986 }, 00:18:04.986 "claimed": false, 00:18:04.986 "zoned": false, 00:18:04.986 "supported_io_types": { 00:18:04.986 "read": true, 00:18:04.986 "write": true, 00:18:04.986 "unmap": false, 00:18:04.986 "flush": false, 00:18:04.986 "reset": true, 00:18:04.986 "nvme_admin": false, 00:18:04.986 "nvme_io": false, 00:18:04.986 "nvme_io_md": false, 00:18:04.986 "write_zeroes": true, 00:18:04.986 "zcopy": false, 00:18:04.986 "get_zone_info": false, 00:18:04.986 "zone_management": false, 00:18:04.986 "zone_append": false, 00:18:04.986 "compare": false, 00:18:04.986 "compare_and_write": false, 00:18:04.986 "abort": false, 00:18:04.986 "seek_hole": false, 00:18:04.986 "seek_data": false, 00:18:04.986 "copy": false, 00:18:04.986 "nvme_iov_md": false 00:18:04.986 }, 00:18:04.986 "memory_domains": [ 00:18:04.986 { 00:18:04.986 "dma_device_id": "system", 00:18:04.986 "dma_device_type": 1 00:18:04.986 }, 00:18:04.986 { 00:18:04.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.986 "dma_device_type": 2 00:18:04.986 }, 00:18:04.986 { 00:18:04.986 "dma_device_id": "system", 00:18:04.986 "dma_device_type": 1 00:18:04.986 }, 00:18:04.986 { 00:18:04.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.986 "dma_device_type": 2 00:18:04.986 } 00:18:04.986 ], 00:18:04.986 "driver_specific": { 00:18:04.986 "raid": { 00:18:04.986 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:04.986 "strip_size_kb": 0, 00:18:04.986 "state": "online", 00:18:04.986 "raid_level": "raid1", 00:18:04.986 "superblock": true, 00:18:04.986 "num_base_bdevs": 2, 00:18:04.986 "num_base_bdevs_discovered": 2, 00:18:04.986 "num_base_bdevs_operational": 2, 00:18:04.986 "base_bdevs_list": [ 00:18:04.986 { 00:18:04.986 "name": "pt1", 00:18:04.986 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:04.986 "is_configured": true, 00:18:04.986 "data_offset": 256, 00:18:04.986 "data_size": 7936 00:18:04.986 }, 00:18:04.986 { 00:18:04.986 "name": "pt2", 00:18:04.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.986 "is_configured": true, 00:18:04.986 "data_offset": 256, 00:18:04.986 "data_size": 7936 00:18:04.986 } 00:18:04.986 ] 00:18:04.986 } 00:18:04.986 } 00:18:04.986 }' 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:04.986 pt2' 00:18:04.986 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:05.247 23:52:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.247 [2024-12-06 23:52:16.665147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b11f3149-941e-4b5f-94f7-5b2bf5b6cb38 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b11f3149-941e-4b5f-94f7-5b2bf5b6cb38 ']' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.247 [2024-12-06 23:52:16.708836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.247 [2024-12-06 23:52:16.708905] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.247 [2024-12-06 23:52:16.708973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.247 [2024-12-06 23:52:16.709034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.247 [2024-12-06 23:52:16.709045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.247 23:52:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.247 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:05.507 23:52:16 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.507 [2024-12-06 23:52:16.844618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:05.507 [2024-12-06 23:52:16.846430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:05.507 [2024-12-06 23:52:16.846493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:05.507 [2024-12-06 23:52:16.846540] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:05.507 [2024-12-06 23:52:16.846552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.507 [2024-12-06 23:52:16.846561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:05.507 request: 00:18:05.507 { 00:18:05.507 "name": "raid_bdev1", 00:18:05.507 "raid_level": "raid1", 00:18:05.507 "base_bdevs": [ 00:18:05.507 "malloc1", 00:18:05.507 "malloc2" 00:18:05.507 ], 00:18:05.507 "superblock": false, 00:18:05.507 "method": "bdev_raid_create", 00:18:05.507 "req_id": 1 00:18:05.507 } 00:18:05.507 Got JSON-RPC error response 00:18:05.507 response: 00:18:05.507 { 00:18:05.507 "code": -17, 00:18:05.507 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:05.507 } 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.507 23:52:16 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.507 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.507 [2024-12-06 23:52:16.912489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:05.507 [2024-12-06 23:52:16.912583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.507 [2024-12-06 23:52:16.912613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:05.507 [2024-12-06 23:52:16.912641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.507 [2024-12-06 23:52:16.914488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.507 [2024-12-06 23:52:16.914558] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:05.508 [2024-12-06 23:52:16.914616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:05.508 [2024-12-06 23:52:16.914709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:05.508 pt1 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.508 23:52:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.508 
"name": "raid_bdev1", 00:18:05.508 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:05.508 "strip_size_kb": 0, 00:18:05.508 "state": "configuring", 00:18:05.508 "raid_level": "raid1", 00:18:05.508 "superblock": true, 00:18:05.508 "num_base_bdevs": 2, 00:18:05.508 "num_base_bdevs_discovered": 1, 00:18:05.508 "num_base_bdevs_operational": 2, 00:18:05.508 "base_bdevs_list": [ 00:18:05.508 { 00:18:05.508 "name": "pt1", 00:18:05.508 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.508 "is_configured": true, 00:18:05.508 "data_offset": 256, 00:18:05.508 "data_size": 7936 00:18:05.508 }, 00:18:05.508 { 00:18:05.508 "name": null, 00:18:05.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.508 "is_configured": false, 00:18:05.508 "data_offset": 256, 00:18:05.508 "data_size": 7936 00:18:05.508 } 00:18:05.508 ] 00:18:05.508 }' 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.508 23:52:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.078 [2024-12-06 23:52:17.399830] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:06.078 [2024-12-06 23:52:17.399882] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.078 [2024-12-06 23:52:17.399898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:06.078 [2024-12-06 23:52:17.399907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.078 [2024-12-06 23:52:17.400011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.078 [2024-12-06 23:52:17.400026] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:06.078 [2024-12-06 23:52:17.400059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:06.078 [2024-12-06 23:52:17.400075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.078 [2024-12-06 23:52:17.400139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:06.078 [2024-12-06 23:52:17.400148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:06.078 [2024-12-06 23:52:17.400207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:06.078 [2024-12-06 23:52:17.400261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:06.078 [2024-12-06 23:52:17.400267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:06.078 [2024-12-06 23:52:17.400314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.078 pt2 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:06.078 23:52:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.078 "name": 
"raid_bdev1", 00:18:06.078 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:06.078 "strip_size_kb": 0, 00:18:06.078 "state": "online", 00:18:06.078 "raid_level": "raid1", 00:18:06.078 "superblock": true, 00:18:06.078 "num_base_bdevs": 2, 00:18:06.078 "num_base_bdevs_discovered": 2, 00:18:06.078 "num_base_bdevs_operational": 2, 00:18:06.078 "base_bdevs_list": [ 00:18:06.078 { 00:18:06.078 "name": "pt1", 00:18:06.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.078 "is_configured": true, 00:18:06.078 "data_offset": 256, 00:18:06.078 "data_size": 7936 00:18:06.078 }, 00:18:06.078 { 00:18:06.078 "name": "pt2", 00:18:06.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.078 "is_configured": true, 00:18:06.078 "data_offset": 256, 00:18:06.078 "data_size": 7936 00:18:06.078 } 00:18:06.078 ] 00:18:06.078 }' 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.078 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.339 23:52:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.339 [2024-12-06 23:52:17.867372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.339 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.599 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.599 "name": "raid_bdev1", 00:18:06.599 "aliases": [ 00:18:06.599 "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38" 00:18:06.599 ], 00:18:06.599 "product_name": "Raid Volume", 00:18:06.599 "block_size": 4128, 00:18:06.599 "num_blocks": 7936, 00:18:06.599 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:06.599 "md_size": 32, 00:18:06.599 "md_interleave": true, 00:18:06.599 "dif_type": 0, 00:18:06.599 "assigned_rate_limits": { 00:18:06.599 "rw_ios_per_sec": 0, 00:18:06.599 "rw_mbytes_per_sec": 0, 00:18:06.599 "r_mbytes_per_sec": 0, 00:18:06.599 "w_mbytes_per_sec": 0 00:18:06.599 }, 00:18:06.599 "claimed": false, 00:18:06.599 "zoned": false, 00:18:06.599 "supported_io_types": { 00:18:06.599 "read": true, 00:18:06.599 "write": true, 00:18:06.599 "unmap": false, 00:18:06.599 "flush": false, 00:18:06.599 "reset": true, 00:18:06.599 "nvme_admin": false, 00:18:06.599 "nvme_io": false, 00:18:06.599 "nvme_io_md": false, 00:18:06.599 "write_zeroes": true, 00:18:06.599 "zcopy": false, 00:18:06.599 "get_zone_info": false, 00:18:06.599 "zone_management": false, 00:18:06.599 "zone_append": false, 00:18:06.599 "compare": false, 00:18:06.599 "compare_and_write": false, 00:18:06.599 "abort": false, 00:18:06.599 "seek_hole": false, 00:18:06.599 "seek_data": false, 00:18:06.599 "copy": false, 00:18:06.599 "nvme_iov_md": 
false 00:18:06.599 }, 00:18:06.599 "memory_domains": [ 00:18:06.599 { 00:18:06.599 "dma_device_id": "system", 00:18:06.599 "dma_device_type": 1 00:18:06.599 }, 00:18:06.599 { 00:18:06.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.599 "dma_device_type": 2 00:18:06.599 }, 00:18:06.599 { 00:18:06.599 "dma_device_id": "system", 00:18:06.599 "dma_device_type": 1 00:18:06.599 }, 00:18:06.599 { 00:18:06.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.599 "dma_device_type": 2 00:18:06.599 } 00:18:06.599 ], 00:18:06.599 "driver_specific": { 00:18:06.599 "raid": { 00:18:06.599 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:06.599 "strip_size_kb": 0, 00:18:06.599 "state": "online", 00:18:06.599 "raid_level": "raid1", 00:18:06.599 "superblock": true, 00:18:06.599 "num_base_bdevs": 2, 00:18:06.599 "num_base_bdevs_discovered": 2, 00:18:06.599 "num_base_bdevs_operational": 2, 00:18:06.599 "base_bdevs_list": [ 00:18:06.599 { 00:18:06.599 "name": "pt1", 00:18:06.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.599 "is_configured": true, 00:18:06.599 "data_offset": 256, 00:18:06.599 "data_size": 7936 00:18:06.599 }, 00:18:06.599 { 00:18:06.599 "name": "pt2", 00:18:06.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.599 "is_configured": true, 00:18:06.599 "data_offset": 256, 00:18:06.599 "data_size": 7936 00:18:06.599 } 00:18:06.599 ] 00:18:06.599 } 00:18:06.599 } 00:18:06.599 }' 00:18:06.599 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.599 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:06.599 pt2' 00:18:06.599 23:52:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.599 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:06.599 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.599 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:06.599 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:06.600 [2024-12-06 23:52:18.114984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.600 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b11f3149-941e-4b5f-94f7-5b2bf5b6cb38 '!=' b11f3149-941e-4b5f-94f7-5b2bf5b6cb38 ']' 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.860 [2024-12-06 23:52:18.166693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:06.860 "name": "raid_bdev1", 00:18:06.860 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:06.860 "strip_size_kb": 0, 00:18:06.860 "state": "online", 00:18:06.860 "raid_level": "raid1", 00:18:06.860 "superblock": true, 00:18:06.860 "num_base_bdevs": 2, 00:18:06.860 "num_base_bdevs_discovered": 1, 00:18:06.860 "num_base_bdevs_operational": 1, 00:18:06.860 "base_bdevs_list": [ 00:18:06.860 { 00:18:06.860 "name": null, 00:18:06.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.860 "is_configured": false, 00:18:06.860 "data_offset": 0, 00:18:06.860 "data_size": 7936 00:18:06.860 }, 00:18:06.860 { 00:18:06.860 "name": "pt2", 00:18:06.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.860 "is_configured": true, 00:18:06.860 "data_offset": 256, 00:18:06.860 "data_size": 7936 00:18:06.860 } 00:18:06.860 ] 00:18:06.860 }' 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.860 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.121 [2024-12-06 23:52:18.597873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.121 [2024-12-06 23:52:18.597943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.121 [2024-12-06 23:52:18.598023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.121 [2024-12-06 23:52:18.598072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:07.121 [2024-12-06 23:52:18.598107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.121 [2024-12-06 23:52:18.673776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.121 [2024-12-06 23:52:18.673818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.121 [2024-12-06 23:52:18.673831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:07.121 [2024-12-06 23:52:18.673840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.121 [2024-12-06 23:52:18.675722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.121 [2024-12-06 23:52:18.675758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.121 [2024-12-06 23:52:18.675798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:07.121 [2024-12-06 23:52:18.675847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.121 [2024-12-06 23:52:18.675900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:07.121 [2024-12-06 23:52:18.675911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:07.121 [2024-12-06 23:52:18.675996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:07.121 [2024-12-06 23:52:18.676054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:07.121 [2024-12-06 23:52:18.676061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:07.121 [2024-12-06 23:52:18.676111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.121 pt2 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.121 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.380 23:52:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.380 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.380 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.380 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.380 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.380 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.380 "name": "raid_bdev1", 00:18:07.380 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:07.380 "strip_size_kb": 0, 00:18:07.380 "state": "online", 00:18:07.380 "raid_level": "raid1", 00:18:07.380 "superblock": true, 00:18:07.380 "num_base_bdevs": 2, 00:18:07.380 "num_base_bdevs_discovered": 1, 00:18:07.380 "num_base_bdevs_operational": 1, 00:18:07.380 "base_bdevs_list": [ 00:18:07.380 { 00:18:07.380 "name": null, 00:18:07.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.380 "is_configured": false, 00:18:07.380 "data_offset": 256, 00:18:07.380 "data_size": 7936 00:18:07.380 }, 00:18:07.380 { 00:18:07.380 "name": "pt2", 00:18:07.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.380 "is_configured": true, 00:18:07.380 "data_offset": 256, 00:18:07.380 "data_size": 7936 00:18:07.380 } 00:18:07.380 ] 00:18:07.380 }' 00:18:07.380 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.380 23:52:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.640 23:52:19 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.640 [2024-12-06 23:52:19.132935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.640 [2024-12-06 23:52:19.133010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.640 [2024-12-06 23:52:19.133086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.640 [2024-12-06 23:52:19.133135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.640 [2024-12-06 23:52:19.133165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.640 [2024-12-06 23:52:19.184880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:07.640 [2024-12-06 23:52:19.184981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.640 [2024-12-06 23:52:19.185011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:07.640 [2024-12-06 23:52:19.185036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.640 [2024-12-06 23:52:19.186793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.640 [2024-12-06 23:52:19.186856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:07.640 [2024-12-06 23:52:19.186912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:07.640 [2024-12-06 23:52:19.186986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.640 [2024-12-06 23:52:19.187083] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:07.640 [2024-12-06 23:52:19.187144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.640 [2024-12-06 23:52:19.187184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:07.640 [2024-12-06 23:52:19.187299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.640 [2024-12-06 23:52:19.187396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:07.640 [2024-12-06 23:52:19.187436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:07.640 [2024-12-06 23:52:19.187513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:07.640 [2024-12-06 23:52:19.187598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:07.640 [2024-12-06 23:52:19.187631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:07.640 [2024-12-06 23:52:19.187749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.640 pt1 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.640 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.641 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.641 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.641 23:52:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.641 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.641 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.641 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.641 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.641 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.900 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.900 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.900 "name": "raid_bdev1", 00:18:07.900 "uuid": "b11f3149-941e-4b5f-94f7-5b2bf5b6cb38", 00:18:07.900 "strip_size_kb": 0, 00:18:07.900 "state": "online", 00:18:07.900 "raid_level": "raid1", 00:18:07.900 "superblock": true, 00:18:07.900 "num_base_bdevs": 2, 00:18:07.901 "num_base_bdevs_discovered": 1, 00:18:07.901 "num_base_bdevs_operational": 1, 00:18:07.901 "base_bdevs_list": [ 00:18:07.901 { 00:18:07.901 "name": null, 00:18:07.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.901 "is_configured": false, 00:18:07.901 "data_offset": 256, 00:18:07.901 "data_size": 7936 00:18:07.901 }, 00:18:07.901 { 00:18:07.901 "name": "pt2", 00:18:07.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.901 "is_configured": true, 00:18:07.901 "data_offset": 256, 00:18:07.901 "data_size": 7936 00:18:07.901 } 00:18:07.901 ] 00:18:07.901 }' 00:18:07.901 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.901 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.161 [2024-12-06 23:52:19.672280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b11f3149-941e-4b5f-94f7-5b2bf5b6cb38 '!=' b11f3149-941e-4b5f-94f7-5b2bf5b6cb38 ']' 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88610 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88610 ']' 00:18:08.161 23:52:19 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88610 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.161 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88610 00:18:08.420 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.421 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.421 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88610' 00:18:08.421 killing process with pid 88610 00:18:08.421 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88610 00:18:08.421 [2024-12-06 23:52:19.749857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.421 [2024-12-06 23:52:19.749919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.421 [2024-12-06 23:52:19.749952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.421 [2024-12-06 23:52:19.749964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:08.421 23:52:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88610 00:18:08.421 [2024-12-06 23:52:19.944788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.804 23:52:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:09.804 00:18:09.804 real 0m6.048s 00:18:09.804 user 0m9.179s 00:18:09.804 sys 0m1.139s 00:18:09.804 
************************************ 00:18:09.804 END TEST raid_superblock_test_md_interleaved 00:18:09.804 ************************************ 00:18:09.804 23:52:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.804 23:52:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.804 23:52:21 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:09.804 23:52:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:09.804 23:52:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.804 23:52:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.804 ************************************ 00:18:09.804 START TEST raid_rebuild_test_sb_md_interleaved 00:18:09.804 ************************************ 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88937 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88937 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88937 ']' 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.804 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.804 [2024-12-06 23:52:21.155626] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:18:09.804 [2024-12-06 23:52:21.155831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88937 ] 00:18:09.805 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:09.805 Zero copy mechanism will not be used. 
00:18:09.805 [2024-12-06 23:52:21.326313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.065 [2024-12-06 23:52:21.429295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.065 [2024-12-06 23:52:21.622779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.065 [2024-12-06 23:52:21.622897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.636 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.636 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:10.636 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.636 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:10.636 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.636 23:52:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 BaseBdev1_malloc 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 [2024-12-06 23:52:22.012870] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:10.636 [2024-12-06 23:52:22.013022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.636 
[2024-12-06 23:52:22.013060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:10.636 [2024-12-06 23:52:22.013089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.636 [2024-12-06 23:52:22.014918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.636 [2024-12-06 23:52:22.014991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:10.636 BaseBdev1 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 BaseBdev2_malloc 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 [2024-12-06 23:52:22.065291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:10.636 [2024-12-06 23:52:22.065348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.636 [2024-12-06 23:52:22.065367] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:10.636 [2024-12-06 23:52:22.065378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.636 [2024-12-06 23:52:22.067136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.636 [2024-12-06 23:52:22.067174] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:10.636 BaseBdev2 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.636 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 spare_malloc 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.637 spare_delay 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.637 23:52:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.637 [2024-12-06 23:52:22.163363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.637 [2024-12-06 23:52:22.163422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.637 [2024-12-06 23:52:22.163441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:10.637 [2024-12-06 23:52:22.163452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.637 [2024-12-06 23:52:22.165255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.637 [2024-12-06 23:52:22.165297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.637 spare 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.637 [2024-12-06 23:52:22.175382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.637 [2024-12-06 23:52:22.177148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.637 [2024-12-06 23:52:22.177347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:10.637 [2024-12-06 23:52:22.177363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:10.637 [2024-12-06 23:52:22.177440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:18:10.637 [2024-12-06 23:52:22.177503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:10.637 [2024-12-06 23:52:22.177510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:10.637 [2024-12-06 23:52:22.177570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.637 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.897 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.897 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.897 "name": "raid_bdev1", 00:18:10.897 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:10.897 "strip_size_kb": 0, 00:18:10.897 "state": "online", 00:18:10.897 "raid_level": "raid1", 00:18:10.897 "superblock": true, 00:18:10.897 "num_base_bdevs": 2, 00:18:10.897 "num_base_bdevs_discovered": 2, 00:18:10.897 "num_base_bdevs_operational": 2, 00:18:10.897 "base_bdevs_list": [ 00:18:10.897 { 00:18:10.897 "name": "BaseBdev1", 00:18:10.897 "uuid": "142fbe10-7166-5a14-99e7-026479e1bea2", 00:18:10.897 "is_configured": true, 00:18:10.897 "data_offset": 256, 00:18:10.897 "data_size": 7936 00:18:10.897 }, 00:18:10.897 { 00:18:10.897 "name": "BaseBdev2", 00:18:10.897 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:10.897 "is_configured": true, 00:18:10.897 "data_offset": 256, 00:18:10.897 "data_size": 7936 00:18:10.897 } 00:18:10.897 ] 00:18:10.897 }' 00:18:10.897 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.897 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.157 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.157 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:11.157 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.157 
23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.157 [2024-12-06 23:52:22.626841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.157 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.157 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:11.157 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.158 [2024-12-06 23:52:22.694449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.158 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.418 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.418 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.418 "name": "raid_bdev1", 00:18:11.418 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:11.418 "strip_size_kb": 0, 00:18:11.418 "state": "online", 00:18:11.418 "raid_level": "raid1", 00:18:11.418 "superblock": true, 00:18:11.418 "num_base_bdevs": 2, 00:18:11.418 "num_base_bdevs_discovered": 1, 00:18:11.418 "num_base_bdevs_operational": 1, 00:18:11.418 "base_bdevs_list": [ 00:18:11.418 { 00:18:11.418 "name": null, 00:18:11.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.418 "is_configured": false, 00:18:11.418 "data_offset": 0, 00:18:11.418 "data_size": 7936 00:18:11.418 }, 00:18:11.418 { 00:18:11.418 "name": "BaseBdev2", 00:18:11.418 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:11.418 "is_configured": true, 00:18:11.418 "data_offset": 256, 00:18:11.418 "data_size": 7936 00:18:11.418 } 00:18:11.418 ] 00:18:11.418 }' 00:18:11.418 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.419 23:52:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.679 23:52:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.679 23:52:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.679 23:52:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.679 [2024-12-06 23:52:23.149765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.679 [2024-12-06 23:52:23.165599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:11.679 23:52:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.679 23:52:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:11.679 
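The verify helpers in this trace lean on two jq idioms over and over: selecting one bdev record by name (bdev_raid.sh@113) and defaulting a missing `.process` field to `"none"` (bdev_raid.sh@176/177). A minimal standalone sketch of both, using invented sample JSON shaped like `bdev_raid_get_bdevs` output (requires jq):

```shell
# Sample output shaped like `rpc.py bdev_raid_get_bdevs all`; values are illustrative.
bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
         "num_base_bdevs_discovered":1,"num_base_bdevs_operational":1}]'

# Pick the record for raid_bdev1, as bdev_raid.sh@113 does.
info=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# A missing .process collapses to "none" via jq's // alternative operator,
# which is what bdev_raid.sh@176/177 expect once a rebuild has finished.
ptype=$(echo "$info" | jq -r '.process.type // "none"')
echo "$(echo "$info" | jq -r '.state') $ptype"   # online none
```

The `// "none"` default is what lets the same check work both mid-rebuild (when `.process.type` is `"rebuild"`) and after the process object has been dropped from the JSON.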
[2024-12-06 23:52:23.167407] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.620 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.880 "name": "raid_bdev1", 00:18:12.880 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:12.880 "strip_size_kb": 0, 00:18:12.880 "state": "online", 00:18:12.880 "raid_level": "raid1", 00:18:12.880 "superblock": true, 00:18:12.880 "num_base_bdevs": 2, 00:18:12.880 "num_base_bdevs_discovered": 2, 00:18:12.880 "num_base_bdevs_operational": 2, 00:18:12.880 "process": { 00:18:12.880 "type": "rebuild", 00:18:12.880 "target": "spare", 00:18:12.880 "progress": { 00:18:12.880 
"blocks": 2560, 00:18:12.880 "percent": 32 00:18:12.880 } 00:18:12.880 }, 00:18:12.880 "base_bdevs_list": [ 00:18:12.880 { 00:18:12.880 "name": "spare", 00:18:12.880 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:12.880 "is_configured": true, 00:18:12.880 "data_offset": 256, 00:18:12.880 "data_size": 7936 00:18:12.880 }, 00:18:12.880 { 00:18:12.880 "name": "BaseBdev2", 00:18:12.880 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:12.880 "is_configured": true, 00:18:12.880 "data_offset": 256, 00:18:12.880 "data_size": 7936 00:18:12.880 } 00:18:12.880 ] 00:18:12.880 }' 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.880 [2024-12-06 23:52:24.312234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.880 [2024-12-06 23:52:24.372109] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:12.880 [2024-12-06 23:52:24.372212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.880 [2024-12-06 23:52:24.372245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.880 [2024-12-06 23:52:24.372271] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:18:12.880 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.141 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.141 "name": "raid_bdev1", 00:18:13.141 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:13.141 "strip_size_kb": 0, 00:18:13.141 "state": "online", 00:18:13.141 "raid_level": "raid1", 00:18:13.141 "superblock": true, 00:18:13.141 "num_base_bdevs": 2, 00:18:13.141 "num_base_bdevs_discovered": 1, 00:18:13.141 "num_base_bdevs_operational": 1, 00:18:13.141 "base_bdevs_list": [ 00:18:13.141 { 00:18:13.141 "name": null, 00:18:13.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.141 "is_configured": false, 00:18:13.141 "data_offset": 0, 00:18:13.141 "data_size": 7936 00:18:13.141 }, 00:18:13.141 { 00:18:13.141 "name": "BaseBdev2", 00:18:13.141 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:13.141 "is_configured": true, 00:18:13.141 "data_offset": 256, 00:18:13.141 "data_size": 7936 00:18:13.141 } 00:18:13.141 ] 00:18:13.141 }' 00:18:13.141 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.141 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.402 23:52:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.402 "name": "raid_bdev1", 00:18:13.402 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:13.402 "strip_size_kb": 0, 00:18:13.402 "state": "online", 00:18:13.402 "raid_level": "raid1", 00:18:13.402 "superblock": true, 00:18:13.402 "num_base_bdevs": 2, 00:18:13.402 "num_base_bdevs_discovered": 1, 00:18:13.402 "num_base_bdevs_operational": 1, 00:18:13.402 "base_bdevs_list": [ 00:18:13.402 { 00:18:13.402 "name": null, 00:18:13.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.402 "is_configured": false, 00:18:13.402 "data_offset": 0, 00:18:13.402 "data_size": 7936 00:18:13.402 }, 00:18:13.402 { 00:18:13.402 "name": "BaseBdev2", 00:18:13.402 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:13.402 "is_configured": true, 00:18:13.402 "data_offset": 256, 00:18:13.402 "data_size": 7936 00:18:13.402 } 00:18:13.402 ] 00:18:13.402 }' 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.402 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.663 23:52:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.663 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:13.663 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.663 23:52:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.663 [2024-12-06 23:52:24.996976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.663 [2024-12-06 23:52:25.011921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:13.663 23:52:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.663 23:52:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:13.663 [2024-12-06 23:52:25.013689] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.605 
23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.605 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.605 "name": "raid_bdev1", 00:18:14.605 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:14.605 "strip_size_kb": 0, 00:18:14.605 "state": "online", 00:18:14.605 "raid_level": "raid1", 00:18:14.605 "superblock": true, 00:18:14.605 "num_base_bdevs": 2, 00:18:14.605 "num_base_bdevs_discovered": 2, 00:18:14.605 "num_base_bdevs_operational": 2, 00:18:14.605 "process": { 00:18:14.605 "type": "rebuild", 00:18:14.605 "target": "spare", 00:18:14.605 "progress": { 00:18:14.605 "blocks": 2560, 00:18:14.605 "percent": 32 00:18:14.606 } 00:18:14.606 }, 00:18:14.606 "base_bdevs_list": [ 00:18:14.606 { 00:18:14.606 "name": "spare", 00:18:14.606 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:14.606 "is_configured": true, 00:18:14.606 "data_offset": 256, 00:18:14.606 "data_size": 7936 00:18:14.606 }, 00:18:14.606 { 00:18:14.606 "name": "BaseBdev2", 00:18:14.606 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:14.606 "is_configured": true, 00:18:14.606 "data_offset": 256, 00:18:14.606 "data_size": 7936 00:18:14.606 } 00:18:14.606 ] 00:18:14.606 }' 00:18:14.606 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.606 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.606 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.881 23:52:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:14.881 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=735 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.881 23:52:26 
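The `unary operator expected` error logged above comes from an unquoted, empty shell variable inside a `[` test: bdev_raid.sh line 666 expanded to `[ = false ]`, leaving `=` with no left operand. A minimal reproduction of that failure mode and the usual quoting fix (variable name is hypothetical):

```shell
flag=""   # empty, as when an optional argument was never set

# Unquoted, `[ $flag = false ]` expands to `[ = false ]` and fails with
# "unary operator expected". Quoting keeps the empty string as a real operand:
if [ "$flag" = false ]; then
    result=yes
else
    result=no
fi
echo "$result"   # no
```

With the quotes in place the test simply evaluates to false instead of erroring, which is the behavior the surrounding script falls through to here anyway.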
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.881 "name": "raid_bdev1", 00:18:14.881 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:14.881 "strip_size_kb": 0, 00:18:14.881 "state": "online", 00:18:14.881 "raid_level": "raid1", 00:18:14.881 "superblock": true, 00:18:14.881 "num_base_bdevs": 2, 00:18:14.881 "num_base_bdevs_discovered": 2, 00:18:14.881 "num_base_bdevs_operational": 2, 00:18:14.881 "process": { 00:18:14.881 "type": "rebuild", 00:18:14.881 "target": "spare", 00:18:14.881 "progress": { 00:18:14.881 "blocks": 2816, 00:18:14.881 "percent": 35 00:18:14.881 } 00:18:14.881 }, 00:18:14.881 "base_bdevs_list": [ 00:18:14.881 { 00:18:14.881 "name": "spare", 00:18:14.881 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:14.881 "is_configured": true, 00:18:14.881 "data_offset": 256, 00:18:14.881 "data_size": 7936 00:18:14.881 }, 00:18:14.881 { 00:18:14.881 "name": "BaseBdev2", 00:18:14.881 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:14.881 "is_configured": true, 00:18:14.881 "data_offset": 256, 00:18:14.881 "data_size": 7936 00:18:14.881 } 00:18:14.881 ] 00:18:14.881 }' 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.881 23:52:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.866 "name": "raid_bdev1", 00:18:15.866 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:15.866 "strip_size_kb": 0, 00:18:15.866 "state": "online", 00:18:15.866 "raid_level": "raid1", 00:18:15.866 "superblock": true, 00:18:15.866 "num_base_bdevs": 2, 00:18:15.866 "num_base_bdevs_discovered": 2, 00:18:15.866 
"num_base_bdevs_operational": 2, 00:18:15.866 "process": { 00:18:15.866 "type": "rebuild", 00:18:15.866 "target": "spare", 00:18:15.866 "progress": { 00:18:15.866 "blocks": 5632, 00:18:15.866 "percent": 70 00:18:15.866 } 00:18:15.866 }, 00:18:15.866 "base_bdevs_list": [ 00:18:15.866 { 00:18:15.866 "name": "spare", 00:18:15.866 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:15.866 "is_configured": true, 00:18:15.866 "data_offset": 256, 00:18:15.866 "data_size": 7936 00:18:15.866 }, 00:18:15.866 { 00:18:15.866 "name": "BaseBdev2", 00:18:15.866 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:15.866 "is_configured": true, 00:18:15.866 "data_offset": 256, 00:18:15.866 "data_size": 7936 00:18:15.866 } 00:18:15.866 ] 00:18:15.866 }' 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.866 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.126 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.126 23:52:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.697 [2024-12-06 23:52:28.125135] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:16.697 [2024-12-06 23:52:28.125199] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:16.697 [2024-12-06 23:52:28.125289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.957 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.957 "name": "raid_bdev1", 00:18:16.957 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:16.957 "strip_size_kb": 0, 00:18:16.957 "state": "online", 00:18:16.957 "raid_level": "raid1", 00:18:16.957 "superblock": true, 00:18:16.957 "num_base_bdevs": 2, 00:18:16.958 "num_base_bdevs_discovered": 2, 00:18:16.958 "num_base_bdevs_operational": 2, 00:18:16.958 "base_bdevs_list": [ 00:18:16.958 { 00:18:16.958 "name": "spare", 00:18:16.958 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:16.958 "is_configured": true, 00:18:16.958 "data_offset": 256, 00:18:16.958 "data_size": 7936 00:18:16.958 }, 00:18:16.958 { 00:18:16.958 "name": "BaseBdev2", 00:18:16.958 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:16.958 
"is_configured": true, 00:18:16.958 "data_offset": 256, 00:18:16.958 "data_size": 7936 00:18:16.958 } 00:18:16.958 ] 00:18:16.958 }' 00:18:16.958 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.218 "name": "raid_bdev1", 00:18:17.218 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:17.218 "strip_size_kb": 0, 00:18:17.218 "state": "online", 00:18:17.218 "raid_level": "raid1", 00:18:17.218 "superblock": true, 00:18:17.218 "num_base_bdevs": 2, 00:18:17.218 "num_base_bdevs_discovered": 2, 00:18:17.218 "num_base_bdevs_operational": 2, 00:18:17.218 "base_bdevs_list": [ 00:18:17.218 { 00:18:17.218 "name": "spare", 00:18:17.218 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:17.218 "is_configured": true, 00:18:17.218 "data_offset": 256, 00:18:17.218 "data_size": 7936 00:18:17.218 }, 00:18:17.218 { 00:18:17.218 "name": "BaseBdev2", 00:18:17.218 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:17.218 "is_configured": true, 00:18:17.218 "data_offset": 256, 00:18:17.218 "data_size": 7936 00:18:17.218 } 00:18:17.218 ] 00:18:17.218 }' 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.218 "name": "raid_bdev1", 00:18:17.218 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:17.218 "strip_size_kb": 0, 00:18:17.218 "state": "online", 00:18:17.218 "raid_level": "raid1", 00:18:17.218 "superblock": true, 00:18:17.218 "num_base_bdevs": 2, 00:18:17.218 "num_base_bdevs_discovered": 2, 00:18:17.218 "num_base_bdevs_operational": 2, 00:18:17.218 "base_bdevs_list": [ 00:18:17.218 { 00:18:17.218 "name": "spare", 00:18:17.218 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:17.218 
"is_configured": true, 00:18:17.218 "data_offset": 256, 00:18:17.218 "data_size": 7936 00:18:17.218 }, 00:18:17.218 { 00:18:17.218 "name": "BaseBdev2", 00:18:17.218 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:17.218 "is_configured": true, 00:18:17.218 "data_offset": 256, 00:18:17.218 "data_size": 7936 00:18:17.218 } 00:18:17.218 ] 00:18:17.218 }' 00:18:17.218 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.219 23:52:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 [2024-12-06 23:52:29.160783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.788 [2024-12-06 23:52:29.160866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.788 [2024-12-06 23:52:29.160943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.788 [2024-12-06 23:52:29.161020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.788 [2024-12-06 23:52:29.161029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 [2024-12-06 23:52:29.236736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.788 [2024-12-06 23:52:29.236785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.788 [2024-12-06 23:52:29.236805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:17.788 [2024-12-06 23:52:29.236814] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.788 [2024-12-06 23:52:29.238674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.788 [2024-12-06 23:52:29.238777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.788 [2024-12-06 23:52:29.238838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:17.788 [2024-12-06 23:52:29.238890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.788 [2024-12-06 23:52:29.239005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.788 spare 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 [2024-12-06 23:52:29.338895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:17.788 [2024-12-06 23:52:29.338924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:17.788 [2024-12-06 23:52:29.338999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:17.788 [2024-12-06 23:52:29.339069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:17.788 [2024-12-06 23:52:29.339078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:17.788 [2024-12-06 23:52:29.339150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.788 23:52:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.788 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.048 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.048 23:52:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.048 "name": "raid_bdev1", 00:18:18.048 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:18.048 "strip_size_kb": 0, 00:18:18.048 "state": "online", 00:18:18.048 "raid_level": "raid1", 00:18:18.048 "superblock": true, 00:18:18.048 "num_base_bdevs": 2, 00:18:18.048 "num_base_bdevs_discovered": 2, 00:18:18.048 "num_base_bdevs_operational": 2, 00:18:18.048 "base_bdevs_list": [ 00:18:18.048 { 00:18:18.048 "name": "spare", 00:18:18.048 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:18.048 "is_configured": true, 00:18:18.048 "data_offset": 256, 00:18:18.048 "data_size": 7936 00:18:18.048 }, 00:18:18.048 { 00:18:18.048 "name": "BaseBdev2", 00:18:18.048 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:18.048 "is_configured": true, 00:18:18.048 "data_offset": 256, 00:18:18.048 "data_size": 7936 00:18:18.048 } 00:18:18.048 ] 00:18:18.048 }' 00:18:18.048 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.048 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.308 23:52:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.308 "name": "raid_bdev1", 00:18:18.308 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:18.308 "strip_size_kb": 0, 00:18:18.308 "state": "online", 00:18:18.308 "raid_level": "raid1", 00:18:18.308 "superblock": true, 00:18:18.308 "num_base_bdevs": 2, 00:18:18.308 "num_base_bdevs_discovered": 2, 00:18:18.308 "num_base_bdevs_operational": 2, 00:18:18.308 "base_bdevs_list": [ 00:18:18.308 { 00:18:18.308 "name": "spare", 00:18:18.308 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:18.308 "is_configured": true, 00:18:18.308 "data_offset": 256, 00:18:18.308 "data_size": 7936 00:18:18.308 }, 00:18:18.308 { 00:18:18.308 "name": "BaseBdev2", 00:18:18.308 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:18.308 "is_configured": true, 00:18:18.308 "data_offset": 256, 00:18:18.308 "data_size": 7936 00:18:18.308 } 00:18:18.308 ] 00:18:18.308 }' 00:18:18.308 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.568 23:52:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.568 [2024-12-06 23:52:29.995548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.568 23:52:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.568 23:52:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.568 "name": "raid_bdev1", 00:18:18.568 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:18.568 "strip_size_kb": 0, 00:18:18.568 "state": "online", 00:18:18.568 "raid_level": "raid1", 00:18:18.568 "superblock": true, 00:18:18.568 "num_base_bdevs": 2, 00:18:18.568 "num_base_bdevs_discovered": 1, 00:18:18.568 "num_base_bdevs_operational": 1, 00:18:18.568 "base_bdevs_list": [ 00:18:18.568 { 00:18:18.568 "name": null, 00:18:18.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.568 "is_configured": false, 00:18:18.568 "data_offset": 0, 00:18:18.568 "data_size": 7936 00:18:18.568 }, 00:18:18.568 { 00:18:18.568 "name": "BaseBdev2", 00:18:18.568 
"uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:18.568 "is_configured": true, 00:18:18.568 "data_offset": 256, 00:18:18.568 "data_size": 7936 00:18:18.568 } 00:18:18.568 ] 00:18:18.568 }' 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.568 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.137 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:19.137 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.137 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.137 [2024-12-06 23:52:30.430770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.137 [2024-12-06 23:52:30.430967] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:19.137 [2024-12-06 23:52:30.431045] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:19.137 [2024-12-06 23:52:30.431100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.137 [2024-12-06 23:52:30.445912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:19.137 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.137 23:52:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:19.137 [2024-12-06 23:52:30.447763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.077 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:20.077 "name": "raid_bdev1", 00:18:20.077 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:20.077 "strip_size_kb": 0, 00:18:20.077 "state": "online", 00:18:20.077 "raid_level": "raid1", 00:18:20.077 "superblock": true, 00:18:20.077 "num_base_bdevs": 2, 00:18:20.077 "num_base_bdevs_discovered": 2, 00:18:20.077 "num_base_bdevs_operational": 2, 00:18:20.077 "process": { 00:18:20.077 "type": "rebuild", 00:18:20.077 "target": "spare", 00:18:20.077 "progress": { 00:18:20.077 "blocks": 2560, 00:18:20.078 "percent": 32 00:18:20.078 } 00:18:20.078 }, 00:18:20.078 "base_bdevs_list": [ 00:18:20.078 { 00:18:20.078 "name": "spare", 00:18:20.078 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:20.078 "is_configured": true, 00:18:20.078 "data_offset": 256, 00:18:20.078 "data_size": 7936 00:18:20.078 }, 00:18:20.078 { 00:18:20.078 "name": "BaseBdev2", 00:18:20.078 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:20.078 "is_configured": true, 00:18:20.078 "data_offset": 256, 00:18:20.078 "data_size": 7936 00:18:20.078 } 00:18:20.078 ] 00:18:20.078 }' 00:18:20.078 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.078 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.078 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.078 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.078 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:20.078 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.078 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.078 [2024-12-06 23:52:31.592537] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.337 [2024-12-06 23:52:31.652385] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:20.337 [2024-12-06 23:52:31.652486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.337 [2024-12-06 23:52:31.652501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.337 [2024-12-06 23:52:31.652510] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.337 23:52:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.337 "name": "raid_bdev1", 00:18:20.337 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:20.337 "strip_size_kb": 0, 00:18:20.337 "state": "online", 00:18:20.337 "raid_level": "raid1", 00:18:20.337 "superblock": true, 00:18:20.337 "num_base_bdevs": 2, 00:18:20.337 "num_base_bdevs_discovered": 1, 00:18:20.337 "num_base_bdevs_operational": 1, 00:18:20.337 "base_bdevs_list": [ 00:18:20.337 { 00:18:20.337 "name": null, 00:18:20.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.337 "is_configured": false, 00:18:20.337 "data_offset": 0, 00:18:20.337 "data_size": 7936 00:18:20.337 }, 00:18:20.337 { 00:18:20.337 "name": "BaseBdev2", 00:18:20.337 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:20.337 "is_configured": true, 00:18:20.337 "data_offset": 256, 00:18:20.337 "data_size": 7936 00:18:20.337 } 00:18:20.337 ] 00:18:20.337 }' 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.337 23:52:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.595 23:52:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:20.595 23:52:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.595 23:52:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.595 [2024-12-06 23:52:32.080296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:20.595 [2024-12-06 23:52:32.080419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.595 [2024-12-06 23:52:32.080461] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:20.595 [2024-12-06 23:52:32.080492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.595 [2024-12-06 23:52:32.080703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.595 [2024-12-06 23:52:32.080754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:20.595 [2024-12-06 23:52:32.080823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:20.595 [2024-12-06 23:52:32.080861] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.595 [2024-12-06 23:52:32.080897] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:20.595 [2024-12-06 23:52:32.080970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.595 [2024-12-06 23:52:32.095435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:20.595 spare 00:18:20.595 23:52:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.595 [2024-12-06 23:52:32.097292] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.595 23:52:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:21.974 "name": "raid_bdev1", 00:18:21.974 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:21.974 "strip_size_kb": 0, 00:18:21.974 "state": "online", 00:18:21.974 "raid_level": "raid1", 00:18:21.974 "superblock": true, 00:18:21.974 "num_base_bdevs": 2, 00:18:21.974 "num_base_bdevs_discovered": 2, 00:18:21.974 "num_base_bdevs_operational": 2, 00:18:21.974 "process": { 00:18:21.974 "type": "rebuild", 00:18:21.974 "target": "spare", 00:18:21.974 "progress": { 00:18:21.974 "blocks": 2560, 00:18:21.974 "percent": 32 00:18:21.974 } 00:18:21.974 }, 00:18:21.974 "base_bdevs_list": [ 00:18:21.974 { 00:18:21.974 "name": "spare", 00:18:21.974 "uuid": "ec0fd453-064e-5e82-9500-6ca3a1d24a17", 00:18:21.974 "is_configured": true, 00:18:21.974 "data_offset": 256, 00:18:21.974 "data_size": 7936 00:18:21.974 }, 00:18:21.974 { 00:18:21.974 "name": "BaseBdev2", 00:18:21.974 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:21.974 "is_configured": true, 00:18:21.974 "data_offset": 256, 00:18:21.974 "data_size": 7936 00:18:21.974 } 00:18:21.974 ] 00:18:21.974 }' 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.974 [2024-12-06 
23:52:33.256995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.974 [2024-12-06 23:52:33.301945] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.974 [2024-12-06 23:52:33.302058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.974 [2024-12-06 23:52:33.302077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.974 [2024-12-06 23:52:33.302084] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.974 23:52:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.974 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.974 "name": "raid_bdev1", 00:18:21.975 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:21.975 "strip_size_kb": 0, 00:18:21.975 "state": "online", 00:18:21.975 "raid_level": "raid1", 00:18:21.975 "superblock": true, 00:18:21.975 "num_base_bdevs": 2, 00:18:21.975 "num_base_bdevs_discovered": 1, 00:18:21.975 "num_base_bdevs_operational": 1, 00:18:21.975 "base_bdevs_list": [ 00:18:21.975 { 00:18:21.975 "name": null, 00:18:21.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.975 "is_configured": false, 00:18:21.975 "data_offset": 0, 00:18:21.975 "data_size": 7936 00:18:21.975 }, 00:18:21.975 { 00:18:21.975 "name": "BaseBdev2", 00:18:21.975 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:21.975 "is_configured": true, 00:18:21.975 "data_offset": 256, 00:18:21.975 "data_size": 7936 00:18:21.975 } 00:18:21.975 ] 00:18:21.975 }' 00:18:21.975 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.975 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.236 23:52:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.236 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.497 "name": "raid_bdev1", 00:18:22.497 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:22.497 "strip_size_kb": 0, 00:18:22.497 "state": "online", 00:18:22.497 "raid_level": "raid1", 00:18:22.497 "superblock": true, 00:18:22.497 "num_base_bdevs": 2, 00:18:22.497 "num_base_bdevs_discovered": 1, 00:18:22.497 "num_base_bdevs_operational": 1, 00:18:22.497 "base_bdevs_list": [ 00:18:22.497 { 00:18:22.497 "name": null, 00:18:22.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.497 "is_configured": false, 00:18:22.497 "data_offset": 0, 00:18:22.497 "data_size": 7936 00:18:22.497 }, 00:18:22.497 { 00:18:22.497 "name": "BaseBdev2", 00:18:22.497 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:22.497 "is_configured": true, 00:18:22.497 "data_offset": 256, 
00:18:22.497 "data_size": 7936 00:18:22.497 } 00:18:22.497 ] 00:18:22.497 }' 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 [2024-12-06 23:52:33.893581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:22.497 [2024-12-06 23:52:33.893636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.497 [2024-12-06 23:52:33.893656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:22.497 [2024-12-06 23:52:33.893676] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.497 [2024-12-06 23:52:33.893836] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.497 [2024-12-06 23:52:33.893851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:22.497 [2024-12-06 23:52:33.893897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:22.497 [2024-12-06 23:52:33.893909] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:22.497 [2024-12-06 23:52:33.893918] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:22.497 [2024-12-06 23:52:33.893927] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:22.497 BaseBdev1 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 23:52:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:23.438 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.438 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.438 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.438 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.438 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.438 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.438 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.438 23:52:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.439 "name": "raid_bdev1", 00:18:23.439 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:23.439 "strip_size_kb": 0, 00:18:23.439 "state": "online", 00:18:23.439 "raid_level": "raid1", 00:18:23.439 "superblock": true, 00:18:23.439 "num_base_bdevs": 2, 00:18:23.439 "num_base_bdevs_discovered": 1, 00:18:23.439 "num_base_bdevs_operational": 1, 00:18:23.439 "base_bdevs_list": [ 00:18:23.439 { 00:18:23.439 "name": null, 00:18:23.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.439 "is_configured": false, 00:18:23.439 "data_offset": 0, 00:18:23.439 "data_size": 7936 00:18:23.439 }, 00:18:23.439 { 00:18:23.439 "name": "BaseBdev2", 00:18:23.439 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:23.439 "is_configured": true, 00:18:23.439 "data_offset": 256, 00:18:23.439 "data_size": 7936 00:18:23.439 } 00:18:23.439 ] 00:18:23.439 }' 00:18:23.439 23:52:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.439 23:52:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.008 "name": "raid_bdev1", 00:18:24.008 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:24.008 "strip_size_kb": 0, 00:18:24.008 "state": "online", 00:18:24.008 "raid_level": "raid1", 00:18:24.008 "superblock": true, 00:18:24.008 "num_base_bdevs": 2, 00:18:24.008 "num_base_bdevs_discovered": 1, 00:18:24.008 "num_base_bdevs_operational": 1, 00:18:24.008 "base_bdevs_list": [ 00:18:24.008 { 00:18:24.008 "name": 
null, 00:18:24.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.008 "is_configured": false, 00:18:24.008 "data_offset": 0, 00:18:24.008 "data_size": 7936 00:18:24.008 }, 00:18:24.008 { 00:18:24.008 "name": "BaseBdev2", 00:18:24.008 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:24.008 "is_configured": true, 00:18:24.008 "data_offset": 256, 00:18:24.008 "data_size": 7936 00:18:24.008 } 00:18:24.008 ] 00:18:24.008 }' 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.008 [2024-12-06 23:52:35.486938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.008 [2024-12-06 23:52:35.487109] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.008 [2024-12-06 23:52:35.487186] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:24.008 request: 00:18:24.008 { 00:18:24.008 "base_bdev": "BaseBdev1", 00:18:24.008 "raid_bdev": "raid_bdev1", 00:18:24.008 "method": "bdev_raid_add_base_bdev", 00:18:24.008 "req_id": 1 00:18:24.008 } 00:18:24.008 Got JSON-RPC error response 00:18:24.008 response: 00:18:24.008 { 00:18:24.008 "code": -22, 00:18:24.008 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:24.008 } 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.008 23:52:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.957 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.217 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.217 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.217 "name": "raid_bdev1", 00:18:25.217 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:25.217 "strip_size_kb": 0, 
00:18:25.217 "state": "online", 00:18:25.217 "raid_level": "raid1", 00:18:25.217 "superblock": true, 00:18:25.217 "num_base_bdevs": 2, 00:18:25.217 "num_base_bdevs_discovered": 1, 00:18:25.217 "num_base_bdevs_operational": 1, 00:18:25.217 "base_bdevs_list": [ 00:18:25.217 { 00:18:25.217 "name": null, 00:18:25.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.217 "is_configured": false, 00:18:25.217 "data_offset": 0, 00:18:25.217 "data_size": 7936 00:18:25.217 }, 00:18:25.217 { 00:18:25.217 "name": "BaseBdev2", 00:18:25.217 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:25.217 "is_configured": true, 00:18:25.217 "data_offset": 256, 00:18:25.217 "data_size": 7936 00:18:25.217 } 00:18:25.217 ] 00:18:25.217 }' 00:18:25.217 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.217 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.476 23:52:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.476 "name": "raid_bdev1", 00:18:25.476 "uuid": "36d06fdf-6261-42a4-8248-1e84e86c6289", 00:18:25.476 "strip_size_kb": 0, 00:18:25.476 "state": "online", 00:18:25.476 "raid_level": "raid1", 00:18:25.476 "superblock": true, 00:18:25.476 "num_base_bdevs": 2, 00:18:25.476 "num_base_bdevs_discovered": 1, 00:18:25.476 "num_base_bdevs_operational": 1, 00:18:25.476 "base_bdevs_list": [ 00:18:25.476 { 00:18:25.476 "name": null, 00:18:25.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.476 "is_configured": false, 00:18:25.476 "data_offset": 0, 00:18:25.476 "data_size": 7936 00:18:25.476 }, 00:18:25.476 { 00:18:25.476 "name": "BaseBdev2", 00:18:25.476 "uuid": "5ceb7091-74f9-5785-8b7e-734e7a87471b", 00:18:25.476 "is_configured": true, 00:18:25.476 "data_offset": 256, 00:18:25.476 "data_size": 7936 00:18:25.476 } 00:18:25.476 ] 00:18:25.476 }' 00:18:25.476 23:52:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.476 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.476 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88937 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88937 ']' 00:18:25.735 23:52:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88937 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88937 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88937' 00:18:25.735 killing process with pid 88937 00:18:25.735 Received shutdown signal, test time was about 60.000000 seconds 00:18:25.735 00:18:25.735 Latency(us) 00:18:25.735 [2024-12-06T23:52:37.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.735 [2024-12-06T23:52:37.298Z] =================================================================================================================== 00:18:25.735 [2024-12-06T23:52:37.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88937 00:18:25.735 [2024-12-06 23:52:37.117980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.735 [2024-12-06 23:52:37.118082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.735 [2024-12-06 23:52:37.118120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.735 [2024-12-06 23:52:37.118131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:25.735 23:52:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88937 00:18:25.995 [2024-12-06 23:52:37.396884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.933 23:52:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:26.933 00:18:26.933 real 0m17.364s 00:18:26.933 user 0m22.733s 00:18:26.933 sys 0m1.673s 00:18:26.933 23:52:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.933 23:52:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.933 ************************************ 00:18:26.933 END TEST raid_rebuild_test_sb_md_interleaved 00:18:26.933 ************************************ 00:18:26.933 23:52:38 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:26.933 23:52:38 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:26.933 23:52:38 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88937 ']' 00:18:26.933 23:52:38 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88937 00:18:27.193 23:52:38 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:27.193 00:18:27.193 real 11m57.008s 00:18:27.193 user 16m5.355s 00:18:27.193 sys 1m55.317s 00:18:27.193 23:52:38 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.193 23:52:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.193 ************************************ 00:18:27.193 END TEST bdev_raid 00:18:27.193 ************************************ 00:18:27.193 23:52:38 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:27.193 23:52:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:27.193 23:52:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.193 23:52:38 -- common/autotest_common.sh@10 -- # set +x 00:18:27.193 
************************************ 00:18:27.193 START TEST spdkcli_raid 00:18:27.193 ************************************ 00:18:27.194 23:52:38 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:27.194 * Looking for test storage... 00:18:27.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:27.194 23:52:38 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:27.194 23:52:38 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:27.194 23:52:38 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:27.455 23:52:38 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.455 23:52:38 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:27.455 23:52:38 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.455 23:52:38 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:27.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.455 --rc genhtml_branch_coverage=1 00:18:27.455 --rc genhtml_function_coverage=1 00:18:27.455 --rc genhtml_legend=1 00:18:27.455 --rc geninfo_all_blocks=1 00:18:27.455 --rc geninfo_unexecuted_blocks=1 00:18:27.455 00:18:27.455 ' 00:18:27.455 23:52:38 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:27.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.455 --rc genhtml_branch_coverage=1 00:18:27.455 --rc genhtml_function_coverage=1 00:18:27.455 --rc genhtml_legend=1 00:18:27.455 --rc geninfo_all_blocks=1 00:18:27.455 --rc geninfo_unexecuted_blocks=1 00:18:27.455 00:18:27.455 ' 00:18:27.455 
23:52:38 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:27.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.455 --rc genhtml_branch_coverage=1 00:18:27.455 --rc genhtml_function_coverage=1 00:18:27.455 --rc genhtml_legend=1 00:18:27.455 --rc geninfo_all_blocks=1 00:18:27.455 --rc geninfo_unexecuted_blocks=1 00:18:27.455 00:18:27.455 ' 00:18:27.455 23:52:38 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:27.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.455 --rc genhtml_branch_coverage=1 00:18:27.455 --rc genhtml_function_coverage=1 00:18:27.455 --rc genhtml_legend=1 00:18:27.455 --rc geninfo_all_blocks=1 00:18:27.455 --rc geninfo_unexecuted_blocks=1 00:18:27.455 00:18:27.455 ' 00:18:27.455 23:52:38 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:27.455 23:52:38 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:27.455 23:52:38 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:27.455 23:52:38 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:27.455 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:27.455 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:27.455 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:27.455 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:27.456 23:52:38 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:27.456 23:52:38 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.456 23:52:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89620 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:27.456 23:52:38 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89620 00:18:27.456 23:52:38 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89620 ']' 00:18:27.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.456 23:52:38 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.456 23:52:38 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.456 23:52:38 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.456 23:52:38 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.456 23:52:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.456 [2024-12-06 23:52:38.978893] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:18:27.456 [2024-12-06 23:52:38.979011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89620 ] 00:18:27.716 [2024-12-06 23:52:39.157990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:27.716 [2024-12-06 23:52:39.267920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.716 [2024-12-06 23:52:39.267953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.653 23:52:40 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.653 23:52:40 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:28.653 23:52:40 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:28.653 23:52:40 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.653 23:52:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.653 23:52:40 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:28.653 23:52:40 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.653 23:52:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.653 23:52:40 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:28.653 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:28.653 ' 00:18:30.561 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:30.561 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:30.561 23:52:41 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:30.561 23:52:41 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.561 23:52:41 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.561 23:52:41 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:30.561 23:52:41 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.561 23:52:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.561 23:52:41 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:30.561 ' 00:18:31.500 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:31.500 23:52:43 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:31.500 23:52:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.500 23:52:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.500 23:52:43 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:31.500 23:52:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.760 23:52:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.760 23:52:43 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:31.760 23:52:43 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:32.020 23:52:43 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:32.281 23:52:43 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:32.281 23:52:43 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:32.281 23:52:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.281 23:52:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.281 23:52:43 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:32.281 23:52:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.281 23:52:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.281 23:52:43 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:32.281 ' 00:18:33.222 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:33.222 23:52:44 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:33.222 23:52:44 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.222 23:52:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.482 23:52:44 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:33.482 23:52:44 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.482 23:52:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.482 23:52:44 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:33.482 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:33.482 ' 00:18:34.866 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:34.866 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:34.866 23:52:46 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:34.866 23:52:46 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89620 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89620 ']' 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89620 00:18:34.866 23:52:46 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89620 00:18:34.866 killing process with pid 89620 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89620' 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89620 00:18:34.866 23:52:46 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89620 00:18:37.408 23:52:48 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:37.408 23:52:48 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89620 ']' 00:18:37.408 23:52:48 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89620 00:18:37.408 23:52:48 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89620 ']' 00:18:37.408 23:52:48 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89620 00:18:37.408 Process with pid 89620 is not found 00:18:37.408 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89620) - No such process 00:18:37.408 23:52:48 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89620 is not found' 00:18:37.408 23:52:48 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:37.408 23:52:48 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:37.408 23:52:48 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:37.408 23:52:48 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:37.408 ************************************ 00:18:37.408 END TEST spdkcli_raid 
00:18:37.408 ************************************ 00:18:37.408 00:18:37.408 real 0m10.023s 00:18:37.408 user 0m20.561s 00:18:37.408 sys 0m1.180s 00:18:37.408 23:52:48 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.408 23:52:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.408 23:52:48 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:37.408 23:52:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:37.408 23:52:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.408 23:52:48 -- common/autotest_common.sh@10 -- # set +x 00:18:37.408 ************************************ 00:18:37.408 START TEST blockdev_raid5f 00:18:37.408 ************************************ 00:18:37.408 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:37.408 * Looking for test storage... 00:18:37.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:37.408 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:37.408 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:37.408 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:37.408 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.408 23:52:48 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:37.408 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.408 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:37.408 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.408 --rc genhtml_branch_coverage=1 00:18:37.409 --rc genhtml_function_coverage=1 00:18:37.409 --rc genhtml_legend=1 00:18:37.409 --rc geninfo_all_blocks=1 00:18:37.409 --rc geninfo_unexecuted_blocks=1 00:18:37.409 00:18:37.409 ' 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:37.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.409 --rc genhtml_branch_coverage=1 00:18:37.409 --rc genhtml_function_coverage=1 00:18:37.409 --rc genhtml_legend=1 00:18:37.409 --rc geninfo_all_blocks=1 00:18:37.409 --rc geninfo_unexecuted_blocks=1 00:18:37.409 00:18:37.409 ' 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:37.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.409 --rc genhtml_branch_coverage=1 00:18:37.409 --rc genhtml_function_coverage=1 00:18:37.409 --rc genhtml_legend=1 00:18:37.409 --rc geninfo_all_blocks=1 00:18:37.409 --rc geninfo_unexecuted_blocks=1 00:18:37.409 00:18:37.409 ' 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:37.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.409 --rc genhtml_branch_coverage=1 00:18:37.409 --rc genhtml_function_coverage=1 00:18:37.409 --rc genhtml_legend=1 00:18:37.409 --rc geninfo_all_blocks=1 00:18:37.409 --rc geninfo_unexecuted_blocks=1 00:18:37.409 00:18:37.409 ' 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89904 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:37.409 23:52:48 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89904 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89904 ']' 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.409 23:52:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.669 [2024-12-06 23:52:49.050989] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:18:37.669 [2024-12-06 23:52:49.051189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89904 ] 00:18:37.669 [2024-12-06 23:52:49.229443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.928 [2024-12-06 23:52:49.333410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:38.867 23:52:50 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:38.867 Malloc0 00:18:38.867 Malloc1 00:18:38.867 Malloc2 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:38.867 23:52:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b541a26c-e60c-4eac-a4e5-b20a1b7de806"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b541a26c-e60c-4eac-a4e5-b20a1b7de806",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b541a26c-e60c-4eac-a4e5-b20a1b7de806",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1b5b2f66-2da8-4fa7-a50b-bca3420df31e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "436d9005-5fa7-4560-a027-084a641ccee4",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fd033024-6a21-4a70-b96f-944175baa44b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:38.867 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:39.126 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:39.126 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:39.126 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:39.126 23:52:50 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89904 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89904 ']' 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89904 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89904 00:18:39.126 killing process with pid 89904 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89904' 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89904 00:18:39.126 23:52:50 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89904 00:18:41.666 23:52:52 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:41.666 23:52:52 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:41.666 23:52:52 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:41.667 23:52:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.667 23:52:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:41.667 ************************************ 00:18:41.667 START TEST bdev_hello_world 00:18:41.667 ************************************ 00:18:41.667 23:52:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:41.667 [2024-12-06 23:52:53.071281] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:18:41.667 [2024-12-06 23:52:53.071462] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89968 ] 00:18:41.926 [2024-12-06 23:52:53.242303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.927 [2024-12-06 23:52:53.346156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.497 [2024-12-06 23:52:53.870580] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:42.497 [2024-12-06 23:52:53.870628] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:42.497 [2024-12-06 23:52:53.870644] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:42.497 [2024-12-06 23:52:53.871188] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:42.497 [2024-12-06 23:52:53.871353] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:42.497 [2024-12-06 23:52:53.871370] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:42.497 [2024-12-06 23:52:53.871419] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:42.497 00:18:42.497 [2024-12-06 23:52:53.871437] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:43.879 00:18:43.879 real 0m2.187s 00:18:43.879 user 0m1.830s 00:18:43.879 sys 0m0.235s 00:18:43.879 ************************************ 00:18:43.879 END TEST bdev_hello_world 00:18:43.879 ************************************ 00:18:43.879 23:52:55 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.879 23:52:55 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:43.879 23:52:55 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:43.879 23:52:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.879 23:52:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.879 23:52:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:43.879 ************************************ 00:18:43.879 START TEST bdev_bounds 00:18:43.879 ************************************ 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90008 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:43.879 Process bdevio pid: 90008 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90008' 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90008 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90008 ']' 00:18:43.879 23:52:55 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.879 23:52:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:43.879 [2024-12-06 23:52:55.356725] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:18:43.879 [2024-12-06 23:52:55.356900] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90008 ] 00:18:44.138 [2024-12-06 23:52:55.536095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:44.138 [2024-12-06 23:52:55.644373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.138 [2024-12-06 23:52:55.644598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.138 [2024-12-06 23:52:55.644606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.707 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.707 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:44.707 23:52:56 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:44.968 I/O targets: 00:18:44.968 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:44.968 00:18:44.968 
00:18:44.968 CUnit - A unit testing framework for C - Version 2.1-3 00:18:44.968 http://cunit.sourceforge.net/ 00:18:44.968 00:18:44.968 00:18:44.968 Suite: bdevio tests on: raid5f 00:18:44.968 Test: blockdev write read block ...passed 00:18:44.968 Test: blockdev write zeroes read block ...passed 00:18:44.968 Test: blockdev write zeroes read no split ...passed 00:18:44.968 Test: blockdev write zeroes read split ...passed 00:18:45.230 Test: blockdev write zeroes read split partial ...passed 00:18:45.230 Test: blockdev reset ...passed 00:18:45.230 Test: blockdev write read 8 blocks ...passed 00:18:45.230 Test: blockdev write read size > 128k ...passed 00:18:45.230 Test: blockdev write read invalid size ...passed 00:18:45.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:45.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:45.230 Test: blockdev write read max offset ...passed 00:18:45.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:45.230 Test: blockdev writev readv 8 blocks ...passed 00:18:45.230 Test: blockdev writev readv 30 x 1block ...passed 00:18:45.230 Test: blockdev writev readv block ...passed 00:18:45.230 Test: blockdev writev readv size > 128k ...passed 00:18:45.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:45.230 Test: blockdev comparev and writev ...passed 00:18:45.230 Test: blockdev nvme passthru rw ...passed 00:18:45.230 Test: blockdev nvme passthru vendor specific ...passed 00:18:45.230 Test: blockdev nvme admin passthru ...passed 00:18:45.231 Test: blockdev copy ...passed 00:18:45.231 00:18:45.231 Run Summary: Type Total Ran Passed Failed Inactive 00:18:45.231 suites 1 1 n/a 0 0 00:18:45.231 tests 23 23 23 0 0 00:18:45.231 asserts 130 130 130 0 n/a 00:18:45.231 00:18:45.231 Elapsed time = 0.622 seconds 00:18:45.231 0 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90008 00:18:45.231 
23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90008 ']' 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90008 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90008 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90008' 00:18:45.231 killing process with pid 90008 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90008 00:18:45.231 23:52:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90008 00:18:46.655 23:52:57 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:46.655 00:18:46.655 real 0m2.708s 00:18:46.655 user 0m6.721s 00:18:46.655 sys 0m0.408s 00:18:46.655 23:52:57 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.655 23:52:57 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:46.655 ************************************ 00:18:46.655 END TEST bdev_bounds 00:18:46.655 ************************************ 00:18:46.655 23:52:58 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:46.655 23:52:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:46.655 23:52:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.655 
23:52:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:46.655 ************************************ 00:18:46.655 START TEST bdev_nbd 00:18:46.655 ************************************ 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90073 00:18:46.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90073 /var/tmp/spdk-nbd.sock 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90073 ']' 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.655 23:52:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:46.655 [2024-12-06 23:52:58.148560] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:18:46.655 [2024-12-06 23:52:58.148786] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.948 [2024-12-06 23:52:58.330132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.948 [2024-12-06 23:52:58.439774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:47.535 23:52:59 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.796 1+0 records in 00:18:47.796 1+0 records out 00:18:47.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470348 s, 8.7 MB/s 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:47.796 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:48.057 { 00:18:48.057 "nbd_device": "/dev/nbd0", 00:18:48.057 "bdev_name": "raid5f" 00:18:48.057 } 00:18:48.057 ]' 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:48.057 { 00:18:48.057 "nbd_device": "/dev/nbd0", 00:18:48.057 "bdev_name": "raid5f" 00:18:48.057 } 00:18:48.057 ]' 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.057 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.318 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.579 23:52:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:48.839 /dev/nbd0 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.839 23:53:00 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.839 1+0 records in 00:18:48.839 1+0 records out 00:18:48.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368071 s, 11.1 MB/s 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.839 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:49.099 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:49.099 { 00:18:49.099 "nbd_device": "/dev/nbd0", 00:18:49.099 "bdev_name": "raid5f" 00:18:49.099 } 00:18:49.099 ]' 00:18:49.099 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:49.099 { 00:18:49.100 "nbd_device": "/dev/nbd0", 00:18:49.100 "bdev_name": "raid5f" 00:18:49.100 } 00:18:49.100 ]' 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:49.100 256+0 records in 00:18:49.100 256+0 records out 00:18:49.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131468 s, 79.8 MB/s 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:49.100 256+0 records in 00:18:49.100 256+0 records out 00:18:49.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303711 s, 34.5 MB/s 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:49.100 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:49.360 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:49.620 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:49.620 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:49.620 23:53:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:49.620 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:49.880 malloc_lvol_verify 00:18:49.880 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:49.880 4df0c588-e792-4f5f-bac1-2f23fa655e82 00:18:50.140 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:50.140 92c5da03-46e4-4fc8-8917-fab922bb7f40 00:18:50.140 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:50.400 /dev/nbd0 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:50.400 mke2fs 1.47.0 (5-Feb-2023) 00:18:50.400 Discarding device blocks: 0/4096 done 00:18:50.400 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:50.400 00:18:50.400 Allocating group tables: 0/1 done 00:18:50.400 Writing inode tables: 0/1 done 00:18:50.400 Creating journal (1024 blocks): done 00:18:50.400 Writing superblocks and filesystem accounting information: 0/1 done 00:18:50.400 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.400 23:53:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90073 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90073 ']' 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90073 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90073 00:18:50.660 killing process with pid 90073 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90073' 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90073 00:18:50.660 23:53:02 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90073 00:18:52.045 23:53:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:52.045 00:18:52.045 real 0m5.467s 00:18:52.045 user 0m7.383s 00:18:52.045 sys 0m1.285s 00:18:52.045 ************************************ 00:18:52.045 END TEST bdev_nbd 00:18:52.045 ************************************ 00:18:52.045 23:53:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.045 23:53:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:52.045 23:53:03 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:52.045 23:53:03 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:18:52.045 23:53:03 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:18:52.045 23:53:03 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:52.045 23:53:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:52.045 23:53:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.045 23:53:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:52.045 ************************************ 00:18:52.045 START TEST bdev_fio 00:18:52.045 ************************************ 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:52.045 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:52.045 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:52.306 ************************************ 00:18:52.306 START TEST bdev_fio_rw_verify 00:18:52.306 ************************************ 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:52.306 23:53:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:52.567 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:52.568 fio-3.35 00:18:52.568 Starting 1 thread 00:19:04.790 00:19:04.790 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90272: Fri Dec 6 23:53:14 2024 00:19:04.790 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(474MiB/10001msec) 00:19:04.790 slat (usec): min=17, max=2446, avg=20.04, stdev= 9.20 00:19:04.790 clat (usec): min=11, max=2772, avg=133.74, stdev=52.08 00:19:04.790 lat (usec): min=31, max=2794, avg=153.79, stdev=53.77 00:19:04.790 clat percentiles (usec): 00:19:04.790 | 50.000th=[ 135], 99.000th=[ 223], 99.900th=[ 457], 99.990th=[ 947], 00:19:04.790 | 99.999th=[ 1516] 00:19:04.790 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(489MiB/9873msec); 0 zone resets 00:19:04.790 slat (usec): min=7, max=1091, avg=16.26, stdev= 5.38 00:19:04.790 clat (usec): min=60, max=1309, avg=302.98, stdev=40.37 00:19:04.790 lat (usec): min=80, max=1507, avg=319.25, stdev=41.44 00:19:04.790 clat percentiles (usec): 00:19:04.790 | 50.000th=[ 306], 99.000th=[ 375], 99.900th=[ 578], 99.990th=[ 1106], 00:19:04.790 | 99.999th=[ 1270] 00:19:04.790 bw ( KiB/s): min=48856, max=53368, per=98.91%, avg=50142.21, stdev=1186.73, samples=19 00:19:04.790 iops : min=12214, max=13342, avg=12535.53, stdev=296.64, samples=19 00:19:04.790 lat (usec) : 20=0.01%, 50=0.01%, 
100=14.75%, 250=39.46%, 500=45.67% 00:19:04.790 lat (usec) : 750=0.08%, 1000=0.03% 00:19:04.790 lat (msec) : 2=0.01%, 4=0.01% 00:19:04.790 cpu : usr=98.69%, sys=0.55%, ctx=26, majf=0, minf=9930 00:19:04.790 IO depths : 1=7.7%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.790 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.790 issued rwts: total=121355,125131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.790 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:04.790 00:19:04.790 Run status group 0 (all jobs): 00:19:04.791 READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=474MiB (497MB), run=10001-10001msec 00:19:04.791 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=489MiB (513MB), run=9873-9873msec 00:19:05.051 ----------------------------------------------------- 00:19:05.051 Suppressions used: 00:19:05.051 count bytes template 00:19:05.051 1 7 /usr/src/fio/parse.c 00:19:05.051 25 2400 /usr/src/fio/iolog.c 00:19:05.051 1 8 libtcmalloc_minimal.so 00:19:05.051 1 904 libcrypto.so 00:19:05.052 ----------------------------------------------------- 00:19:05.052 00:19:05.052 00:19:05.052 real 0m12.760s 00:19:05.052 user 0m13.012s 00:19:05.052 sys 0m0.707s 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:05.052 ************************************ 00:19:05.052 END TEST bdev_fio_rw_verify 00:19:05.052 ************************************ 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio 
-- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b541a26c-e60c-4eac-a4e5-b20a1b7de806"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b541a26c-e60c-4eac-a4e5-b20a1b7de806",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b541a26c-e60c-4eac-a4e5-b20a1b7de806",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1b5b2f66-2da8-4fa7-a50b-bca3420df31e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "436d9005-5fa7-4560-a027-084a641ccee4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fd033024-6a21-4a70-b96f-944175baa44b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:05.052 23:53:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:05.313 23:53:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:05.313 23:53:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:05.313 /home/vagrant/spdk_repo/spdk 00:19:05.313 ************************************ 00:19:05.313 END TEST bdev_fio 00:19:05.313 ************************************ 00:19:05.313 23:53:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:05.313 23:53:16 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:05.313 23:53:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:05.313 00:19:05.313 real 0m13.085s 00:19:05.313 user 0m13.131s 00:19:05.313 sys 0m0.875s 00:19:05.313 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.313 23:53:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:05.313 23:53:16 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:05.313 23:53:16 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:05.313 23:53:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:05.313 23:53:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.313 23:53:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.313 ************************************ 00:19:05.313 START TEST bdev_verify 00:19:05.313 ************************************ 00:19:05.313 23:53:16 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:05.313 [2024-12-06 23:53:16.833333] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 
00:19:05.313 [2024-12-06 23:53:16.833447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90436 ] 00:19:05.572 [2024-12-06 23:53:17.007496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:05.573 [2024-12-06 23:53:17.122612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.573 [2024-12-06 23:53:17.122644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.141 Running I/O for 5 seconds... 00:19:08.465 10271.00 IOPS, 40.12 MiB/s [2024-12-06T23:53:20.968Z] 10281.00 IOPS, 40.16 MiB/s [2024-12-06T23:53:21.907Z] 10354.33 IOPS, 40.45 MiB/s [2024-12-06T23:53:22.848Z] 10366.75 IOPS, 40.50 MiB/s [2024-12-06T23:53:22.848Z] 10350.00 IOPS, 40.43 MiB/s 00:19:11.285 Latency(us) 00:19:11.285 [2024-12-06T23:53:22.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.285 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:11.285 Verification LBA range: start 0x0 length 0x2000 00:19:11.285 raid5f : 5.03 4207.57 16.44 0.00 0.00 45923.64 217.32 32510.43 00:19:11.285 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:11.285 Verification LBA range: start 0x2000 length 0x2000 00:19:11.285 raid5f : 5.02 6142.00 23.99 0.00 0.00 31432.74 208.38 23009.15 00:19:11.285 [2024-12-06T23:53:22.848Z] =================================================================================================================== 00:19:11.285 [2024-12-06T23:53:22.848Z] Total : 10349.57 40.43 0.00 0.00 37325.19 208.38 32510.43 00:19:12.668 ************************************ 00:19:12.668 END TEST bdev_verify 00:19:12.668 ************************************ 00:19:12.668 00:19:12.668 real 0m7.269s 00:19:12.668 user 0m13.434s 00:19:12.668 sys 0m0.279s 
00:19:12.668 23:53:24 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.668 23:53:24 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:12.668 23:53:24 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:12.668 23:53:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:12.668 23:53:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.668 23:53:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:12.668 ************************************ 00:19:12.668 START TEST bdev_verify_big_io 00:19:12.668 ************************************ 00:19:12.668 23:53:24 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:12.668 [2024-12-06 23:53:24.178149] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:19:12.668 [2024-12-06 23:53:24.178262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90534 ] 00:19:13.048 [2024-12-06 23:53:24.357797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:13.048 [2024-12-06 23:53:24.465355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.048 [2024-12-06 23:53:24.465381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.694 Running I/O for 5 seconds... 
00:19:15.571 633.00 IOPS, 39.56 MiB/s [2024-12-06T23:53:28.078Z] 728.50 IOPS, 45.53 MiB/s [2024-12-06T23:53:29.459Z] 739.67 IOPS, 46.23 MiB/s [2024-12-06T23:53:30.398Z] 745.25 IOPS, 46.58 MiB/s [2024-12-06T23:53:30.398Z] 761.60 IOPS, 47.60 MiB/s 00:19:18.835 Latency(us) 00:19:18.835 [2024-12-06T23:53:30.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.835 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:18.835 Verification LBA range: start 0x0 length 0x200 00:19:18.835 raid5f : 5.27 337.13 21.07 0.00 0.00 9459973.48 236.10 401114.66 00:19:18.835 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:18.835 Verification LBA range: start 0x200 length 0x200 00:19:18.835 raid5f : 5.24 436.03 27.25 0.00 0.00 7359871.63 232.52 318693.84 00:19:18.835 [2024-12-06T23:53:30.398Z] =================================================================================================================== 00:19:18.835 [2024-12-06T23:53:30.399Z] Total : 773.15 48.32 0.00 0.00 8278666.19 232.52 401114.66 00:19:20.215 00:19:20.215 real 0m7.553s 00:19:20.215 user 0m14.010s 00:19:20.215 sys 0m0.273s 00:19:20.215 23:53:31 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.215 23:53:31 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.215 ************************************ 00:19:20.215 END TEST bdev_verify_big_io 00:19:20.215 ************************************ 00:19:20.215 23:53:31 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:20.215 23:53:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:20.215 23:53:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.215 23:53:31 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:20.215 ************************************ 00:19:20.215 START TEST bdev_write_zeroes 00:19:20.215 ************************************ 00:19:20.215 23:53:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:20.474 [2024-12-06 23:53:31.805494] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:19:20.474 [2024-12-06 23:53:31.805630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90627 ] 00:19:20.474 [2024-12-06 23:53:31.973834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.733 [2024-12-06 23:53:32.084852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.302 Running I/O for 1 seconds... 
00:19:22.240 29919.00 IOPS, 116.87 MiB/s 00:19:22.240 Latency(us) 00:19:22.240 [2024-12-06T23:53:33.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.240 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:22.240 raid5f : 1.01 29894.32 116.77 0.00 0.00 4269.09 1345.06 5809.52 00:19:22.240 [2024-12-06T23:53:33.803Z] =================================================================================================================== 00:19:22.240 [2024-12-06T23:53:33.803Z] Total : 29894.32 116.77 0.00 0.00 4269.09 1345.06 5809.52 00:19:23.622 00:19:23.622 real 0m3.249s 00:19:23.622 user 0m2.871s 00:19:23.622 sys 0m0.250s 00:19:23.622 23:53:34 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.622 23:53:34 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:23.622 ************************************ 00:19:23.622 END TEST bdev_write_zeroes 00:19:23.622 ************************************ 00:19:23.622 23:53:35 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:23.622 23:53:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:23.622 23:53:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.622 23:53:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.622 ************************************ 00:19:23.622 START TEST bdev_json_nonenclosed 00:19:23.622 ************************************ 00:19:23.622 23:53:35 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:23.622 [2024-12-06 
23:53:35.133168] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:19:23.622 [2024-12-06 23:53:35.133370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90680 ] 00:19:23.881 [2024-12-06 23:53:35.312167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.881 [2024-12-06 23:53:35.425073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.881 [2024-12-06 23:53:35.425241] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:23.881 [2024-12-06 23:53:35.425311] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:23.881 [2024-12-06 23:53:35.425349] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:24.139 00:19:24.139 real 0m0.634s 00:19:24.139 user 0m0.387s 00:19:24.139 sys 0m0.142s 00:19:24.139 ************************************ 00:19:24.139 END TEST bdev_json_nonenclosed 00:19:24.139 ************************************ 00:19:24.139 23:53:35 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.139 23:53:35 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:24.399 23:53:35 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:24.399 23:53:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:24.399 23:53:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.399 23:53:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:24.399 
************************************ 00:19:24.399 START TEST bdev_json_nonarray 00:19:24.399 ************************************ 00:19:24.399 23:53:35 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:24.399 [2024-12-06 23:53:35.829734] Starting SPDK v25.01-pre git sha1 dd2b3744d / DPDK 24.03.0 initialization... 00:19:24.399 [2024-12-06 23:53:35.829840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90711 ] 00:19:24.658 [2024-12-06 23:53:36.003210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.658 [2024-12-06 23:53:36.106517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.658 [2024-12-06 23:53:36.106613] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:24.659 [2024-12-06 23:53:36.106630] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:24.659 [2024-12-06 23:53:36.106647] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:24.918 00:19:24.918 real 0m0.603s 00:19:24.918 user 0m0.370s 00:19:24.918 sys 0m0.130s 00:19:24.918 ************************************ 00:19:24.918 END TEST bdev_json_nonarray 00:19:24.918 ************************************ 00:19:24.918 23:53:36 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.918 23:53:36 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:24.918 23:53:36 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:24.918 ************************************ 00:19:24.918 END TEST blockdev_raid5f 00:19:24.918 ************************************ 00:19:24.918 00:19:24.918 real 0m47.721s 00:19:24.918 user 1m4.457s 00:19:24.918 sys 0m5.011s 00:19:24.918 23:53:36 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.918 23:53:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:24.918 23:53:36 -- spdk/autotest.sh@194 -- # uname -s 00:19:25.178 23:53:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:25.178 23:53:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:25.178 23:53:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:25.178 23:53:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:25.178 23:53:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.178 23:53:36 -- common/autotest_common.sh@10 -- # set +x 00:19:25.178 23:53:36 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:25.178 23:53:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:25.178 23:53:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:25.178 23:53:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:25.178 23:53:36 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:25.178 23:53:36 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:19:25.178 23:53:36 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:25.178 23:53:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.178 23:53:36 -- common/autotest_common.sh@10 -- # set +x 00:19:25.178 23:53:36 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:25.178 23:53:36 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:25.178 23:53:36 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:25.178 23:53:36 -- common/autotest_common.sh@10 -- # set +x 00:19:27.722 INFO: APP EXITING 00:19:27.722 INFO: killing all VMs 00:19:27.722 INFO: killing vhost app 00:19:27.722 INFO: EXIT DONE 00:19:27.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:27.982 Waiting for block devices as requested 00:19:27.982 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.242 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:29.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:29.183 Cleaning 00:19:29.183 Removing: /var/run/dpdk/spdk0/config 00:19:29.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:29.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:29.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:29.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:29.183 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:29.183 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:29.183 Removing: /dev/shm/spdk_tgt_trace.pid56893 00:19:29.183 Removing: /var/run/dpdk/spdk0 00:19:29.183 Removing: /var/run/dpdk/spdk_pid56653 00:19:29.183 Removing: /var/run/dpdk/spdk_pid56893 00:19:29.183 Removing: /var/run/dpdk/spdk_pid57128 00:19:29.183 Removing: /var/run/dpdk/spdk_pid57232 00:19:29.183 Removing: /var/run/dpdk/spdk_pid57288 00:19:29.183 Removing: /var/run/dpdk/spdk_pid57416 00:19:29.183 Removing: 
/var/run/dpdk/spdk_pid57440 00:19:29.183 Removing: /var/run/dpdk/spdk_pid57650 00:19:29.183 Removing: /var/run/dpdk/spdk_pid57761 00:19:29.183 Removing: /var/run/dpdk/spdk_pid57868 00:19:29.183 Removing: /var/run/dpdk/spdk_pid57997 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58110 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58150 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58192 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58262 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58357 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58806 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58875 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58955 00:19:29.183 Removing: /var/run/dpdk/spdk_pid58971 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59121 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59137 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59285 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59306 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59370 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59388 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59458 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59476 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59671 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59713 00:19:29.183 Removing: /var/run/dpdk/spdk_pid59802 00:19:29.183 Removing: /var/run/dpdk/spdk_pid61139 00:19:29.183 Removing: /var/run/dpdk/spdk_pid61345 00:19:29.444 Removing: /var/run/dpdk/spdk_pid61489 00:19:29.444 Removing: /var/run/dpdk/spdk_pid62128 00:19:29.444 Removing: /var/run/dpdk/spdk_pid62340 00:19:29.444 Removing: /var/run/dpdk/spdk_pid62480 00:19:29.444 Removing: /var/run/dpdk/spdk_pid63123 00:19:29.444 Removing: /var/run/dpdk/spdk_pid63453 00:19:29.444 Removing: /var/run/dpdk/spdk_pid63599 00:19:29.444 Removing: /var/run/dpdk/spdk_pid64978 00:19:29.444 Removing: /var/run/dpdk/spdk_pid65237 00:19:29.444 Removing: /var/run/dpdk/spdk_pid65377 00:19:29.444 Removing: /var/run/dpdk/spdk_pid66762 00:19:29.444 Removing: /var/run/dpdk/spdk_pid67021 00:19:29.444 Removing: 
/var/run/dpdk/spdk_pid67161 00:19:29.444 Removing: /var/run/dpdk/spdk_pid68560 00:19:29.444 Removing: /var/run/dpdk/spdk_pid69006 00:19:29.444 Removing: /var/run/dpdk/spdk_pid69146 00:19:29.444 Removing: /var/run/dpdk/spdk_pid70638 00:19:29.444 Removing: /var/run/dpdk/spdk_pid70897 00:19:29.444 Removing: /var/run/dpdk/spdk_pid71048 00:19:29.444 Removing: /var/run/dpdk/spdk_pid72545 00:19:29.444 Removing: /var/run/dpdk/spdk_pid72804 00:19:29.444 Removing: /var/run/dpdk/spdk_pid72955 00:19:29.444 Removing: /var/run/dpdk/spdk_pid74449 00:19:29.444 Removing: /var/run/dpdk/spdk_pid74938 00:19:29.444 Removing: /var/run/dpdk/spdk_pid75084 00:19:29.444 Removing: /var/run/dpdk/spdk_pid75228 00:19:29.444 Removing: /var/run/dpdk/spdk_pid75651 00:19:29.444 Removing: /var/run/dpdk/spdk_pid76381 00:19:29.444 Removing: /var/run/dpdk/spdk_pid76776 00:19:29.444 Removing: /var/run/dpdk/spdk_pid77478 00:19:29.444 Removing: /var/run/dpdk/spdk_pid77919 00:19:29.444 Removing: /var/run/dpdk/spdk_pid78667 00:19:29.444 Removing: /var/run/dpdk/spdk_pid79078 00:19:29.444 Removing: /var/run/dpdk/spdk_pid81043 00:19:29.444 Removing: /var/run/dpdk/spdk_pid81485 00:19:29.444 Removing: /var/run/dpdk/spdk_pid81926 00:19:29.444 Removing: /var/run/dpdk/spdk_pid84020 00:19:29.444 Removing: /var/run/dpdk/spdk_pid84507 00:19:29.444 Removing: /var/run/dpdk/spdk_pid85029 00:19:29.444 Removing: /var/run/dpdk/spdk_pid86080 00:19:29.444 Removing: /var/run/dpdk/spdk_pid86403 00:19:29.444 Removing: /var/run/dpdk/spdk_pid87349 00:19:29.444 Removing: /var/run/dpdk/spdk_pid87676 00:19:29.444 Removing: /var/run/dpdk/spdk_pid88610 00:19:29.444 Removing: /var/run/dpdk/spdk_pid88937 00:19:29.444 Removing: /var/run/dpdk/spdk_pid89620 00:19:29.444 Removing: /var/run/dpdk/spdk_pid89904 00:19:29.444 Removing: /var/run/dpdk/spdk_pid89968 00:19:29.444 Removing: /var/run/dpdk/spdk_pid90008 00:19:29.444 Removing: /var/run/dpdk/spdk_pid90257 00:19:29.444 Removing: /var/run/dpdk/spdk_pid90436 00:19:29.444 Removing: 
/var/run/dpdk/spdk_pid90534 00:19:29.444 Removing: /var/run/dpdk/spdk_pid90627 00:19:29.444 Removing: /var/run/dpdk/spdk_pid90680 00:19:29.704 Removing: /var/run/dpdk/spdk_pid90711 00:19:29.704 Clean 00:19:29.704 23:53:41 -- common/autotest_common.sh@1453 -- # return 0 00:19:29.704 23:53:41 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:29.704 23:53:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.704 23:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:29.704 23:53:41 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:29.704 23:53:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.704 23:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:29.704 23:53:41 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:29.704 23:53:41 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:29.705 23:53:41 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:29.705 23:53:41 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:29.705 23:53:41 -- spdk/autotest.sh@398 -- # hostname 00:19:29.705 23:53:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:29.965 geninfo: WARNING: invalid characters removed from testname! 
00:19:56.530 23:54:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:56.530 23:54:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:58.440 23:54:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:00.979 23:54:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:02.888 23:54:14 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:05.423 23:54:16 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:07.966 23:54:18 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:07.966 23:54:18 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:07.966 23:54:18 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:07.966 23:54:18 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:07.966 23:54:18 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:07.966 23:54:18 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:07.966 + [[ -n 5425 ]] 00:20:07.966 + sudo kill 5425 00:20:07.977 [Pipeline] } 00:20:07.993 [Pipeline] // timeout 00:20:07.998 [Pipeline] } 00:20:08.015 [Pipeline] // stage 00:20:08.022 [Pipeline] } 00:20:08.038 [Pipeline] // catchError 00:20:08.048 [Pipeline] stage 00:20:08.051 [Pipeline] { (Stop VM) 00:20:08.064 [Pipeline] sh 00:20:08.349 + vagrant halt 00:20:10.261 ==> default: Halting domain... 00:20:18.408 [Pipeline] sh 00:20:18.690 + vagrant destroy -f 00:20:20.695 ==> default: Removing domain... 
00:20:20.989 [Pipeline] sh 00:20:21.275 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:20:21.285 [Pipeline] } 00:20:21.302 [Pipeline] // stage 00:20:21.307 [Pipeline] } 00:20:21.322 [Pipeline] // dir 00:20:21.327 [Pipeline] } 00:20:21.343 [Pipeline] // wrap 00:20:21.349 [Pipeline] } 00:20:21.362 [Pipeline] // catchError 00:20:21.372 [Pipeline] stage 00:20:21.374 [Pipeline] { (Epilogue) 00:20:21.388 [Pipeline] sh 00:20:21.674 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:25.885 [Pipeline] catchError 00:20:25.887 [Pipeline] { 00:20:25.897 [Pipeline] sh 00:20:26.242 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:26.242 Artifacts sizes are good 00:20:26.251 [Pipeline] } 00:20:26.264 [Pipeline] // catchError 00:20:26.276 [Pipeline] archiveArtifacts 00:20:26.282 Archiving artifacts 00:20:26.378 [Pipeline] cleanWs 00:20:26.388 [WS-CLEANUP] Deleting project workspace... 00:20:26.389 [WS-CLEANUP] Deferred wipeout is used... 00:20:26.394 [WS-CLEANUP] done 00:20:26.396 [Pipeline] } 00:20:26.408 [Pipeline] // stage 00:20:26.412 [Pipeline] } 00:20:26.427 [Pipeline] // node 00:20:26.431 [Pipeline] End of Pipeline 00:20:26.472 Finished: SUCCESS